Test Report: Docker_Windows 22128

                    
2cb2c94398211ca18cf7c1877ff6bae2d6b3d16e:2025-12-13:42756

Failed tests (34/427)

Order  Failed test  Duration (s)
67 TestErrorSpam/setup 51.82
169 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 518.31
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 374.23
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 53.59
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 54.3
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 53.66
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 741.38
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 54.45
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 20.2
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 5.41
199 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 122.39
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 242.86
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 22.5
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 52.68
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 20.2
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.1
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.47
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.48
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.5
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.49
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.48
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/powershell 2.87
360 TestKubernetesUpgrade 846.93
406 TestStartStop/group/no-preload/serial/FirstStart 528.21
420 TestStartStop/group/newest-cni/serial/FirstStart 516.44
447 TestStartStop/group/no-preload/serial/DeployApp 6.76
448 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 88.43
457 TestStartStop/group/no-preload/serial/SecondStart 378.53
459 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 123.87
469 TestStartStop/group/newest-cni/serial/SecondStart 380.49
490 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.56
506 TestStartStop/group/newest-cni/serial/Pause 9.72
512 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 256.43

TestErrorSpam/setup (51.82s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-107300 -n=1 --memory=3072 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-107300 --driver=docker
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-107300 -n=1 --memory=3072 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-107300 --driver=docker: (51.8172499s)
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube container"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-107300] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
- KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
- MINIKUBE_LOCATION=22128
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting "nospam-107300" primary control-plane node in "nospam-107300" cluster
* Pulling base image v0.0.48-1765275396-22083 ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "nospam-107300" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Failing to connect to https://registry.k8s.io/ from inside the minikube container
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (51.82s)
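
The two unexpected stderr lines above are registry-connectivity warnings, which the spam check counts as failures. The proxy page linked in the output documents setting the standard proxy variables before `minikube start`; a minimal sketch, assuming the Jenkins host actually needs a proxy (the proxy address below is hypothetical, and the NO_PROXY entry covers the minikube subnet flagged later in this report):

    # PowerShell, before invoking minikube (proxy address is hypothetical)
    $Env:HTTPS_PROXY = "http://proxy.example.com:3128"
    $Env:NO_PROXY    = "localhost,127.0.0.1,192.168.49.0/24"  # include the minikube subnet
    out/minikube-windows-amd64.exe start -p nospam-107300 --driver=docker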

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (518.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-482100 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0
E1213 08:49:46.000139    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:52:36.661125    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:52:36.668285    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:52:36.680253    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:52:36.701821    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:52:36.744360    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:52:36.826527    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:52:36.989220    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:52:37.311435    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:52:37.953327    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:52:39.235966    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:52:41.798253    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:52:46.920072    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:52:57.162666    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:53:17.644831    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:53:58.607666    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:54:46.001112    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:55:20.530719    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:56:09.072771    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-482100 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m35.4795452s)

-- stdout --
	* [functional-482100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "functional-482100" primary control-plane node in "functional-482100" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Found network options:
	  - HTTP_PROXY=localhost:63834
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	  - HTTP_PROXY=localhost:63834
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:63834 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:63834 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:63834 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:63834 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-482100 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-482100 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000508197s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00054602s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00054602s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-windows-amd64.exe start -p functional-482100 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0": exit status 109
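
The start failed because the kubelet never answered its health check at 127.0.0.1:10248 within the 4m0s window, on a cgroup v1 WSL2 kernel where kubelet v1.35 warns that cgroup v1 support is deprecated. Minikube's own suggestion at the end of the stderr block is to retry with the systemd cgroup driver; a sketch of that retry, with the flag taken verbatim from the suggestion (whether it resolves the cgroup v1 situation is not established by this run):

    out/minikube-windows-amd64.exe start -p functional-482100 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd

The SystemVerification warning also names a kubelet configuration option, FailCgroupV1=false, for explicitly keeping cgroup v1 with kubelet v1.35 or newer; see the KEP link in the warning for details.
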
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-482100
helpers_test.go:244: (dbg) docker inspect functional-482100:

-- stdout --
	[
	    {
	        "Id": "688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa",
	        "Created": "2025-12-13T08:49:07.27080474Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43282,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T08:49:07.556748749Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/hostname",
	        "HostsPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/hosts",
	        "LogPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa-json.log",
	        "Name": "/functional-482100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-482100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-482100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91-init/diff:/var/lib/docker/overlay2/429aa299c6fcdb1695d08ec7c893c57c033afffcd3ec41fc904bf3236db5abde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-482100",
	                "Source": "/var/lib/docker/volumes/functional-482100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-482100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-482100",
	                "name.minikube.sigs.k8s.io": "functional-482100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0846ee7b9ca8cb54809a7d685cd1bf9a4ebcad80c4fa7d3ad64c01e27d0c8bc4",
	            "SandboxKey": "/var/run/docker/netns/0846ee7b9ca8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63841"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63842"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63844"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63845"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-482100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "88ce21d6cbdebdf878313475255fe0fbc85957ab9cf1fa33630b61bbbfd2061c",
	                    "EndpointID": "88d9584a7fae8c35f7938fb422a7bed2f8ec5a3db15bd02c0d2459ed9f8f0e4d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-482100",
	                        "688ac19b4403"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-482100 -n functional-482100
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-482100 -n functional-482100: exit status 6 (627.5494ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1213 08:57:25.148088    8292 status.go:458] kubeconfig endpoint: get endpoint: "functional-482100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
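
The status output suggests `minikube update-context` to repoint the kubectl context, though in this run the profile is missing from the kubeconfig entirely, so the command may not recover it. A sketch of the suggested fix, assuming the profile's kubeconfig entry still existed:

    out/minikube-windows-amd64.exe update-context -p functional-482100
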
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-482100 logs -n 25: (1.2459466s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                          ARGS                                                           │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-213400 image rm kicbase/echo-server:functional-213400 --alsologtostderr                                      │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:42 UTC │ 13 Dec 25 08:42 UTC │
	│ image          │ functional-213400 image ls                                                                                              │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:42 UTC │ 13 Dec 25 08:42 UTC │
	│ image          │ functional-213400 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr     │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:42 UTC │ 13 Dec 25 08:42 UTC │
	│ image          │ functional-213400 image ls                                                                                              │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:42 UTC │ 13 Dec 25 08:42 UTC │
	│ image          │ functional-213400 image save --daemon kicbase/echo-server:functional-213400 --alsologtostderr                           │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:42 UTC │ 13 Dec 25 08:42 UTC │
	│ service        │ functional-213400 service hello-node --url --format={{.IP}}                                                             │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │                     │
	│ service        │ functional-213400 service hello-node --url                                                                              │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │                     │
	│ addons         │ functional-213400 addons list                                                                                           │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ addons         │ functional-213400 addons list -o json                                                                                   │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ license        │                                                                                                                         │ minikube          │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ start          │ -p functional-213400 --dry-run --memory 250MB --alsologtostderr --driver=docker                                         │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-213400 --alsologtostderr -v=1                                                          │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │                     │
	│ start          │ -p functional-213400 --dry-run --memory 250MB --alsologtostderr --driver=docker                                         │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │                     │
	│ start          │ -p functional-213400 --dry-run --alsologtostderr -v=1 --driver=docker                                                   │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │                     │
	│ update-context │ functional-213400 update-context --alsologtostderr -v=2                                                                 │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ update-context │ functional-213400 update-context --alsologtostderr -v=2                                                                 │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ image          │ functional-213400 image ls --format short --alsologtostderr                                                             │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ image          │ functional-213400 image ls --format yaml --alsologtostderr                                                              │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ ssh            │ functional-213400 ssh pgrep buildkitd                                                                                   │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │                     │
	│ image          │ functional-213400 image ls --format json --alsologtostderr                                                              │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ image          │ functional-213400 image build -t localhost/my-image:functional-213400 testdata\build --alsologtostderr                  │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ image          │ functional-213400 image ls --format table --alsologtostderr                                                             │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ image          │ functional-213400 image ls                                                                                              │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ delete         │ -p functional-213400                                                                                                    │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:48 UTC │ 13 Dec 25 08:48 UTC │
	│ start          │ -p functional-482100 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:48 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:48:49
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 08:48:49.068162   13912 out.go:360] Setting OutFile to fd 740 ...
	I1213 08:48:49.111749   13912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:48:49.111749   13912 out.go:374] Setting ErrFile to fd 1804...
	I1213 08:48:49.111749   13912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:48:49.125311   13912 out.go:368] Setting JSON to false
	I1213 08:48:49.127314   13912 start.go:133] hostinfo: {"hostname":"minikube4","uptime":1536,"bootTime":1765614192,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 08:48:49.127314   13912 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 08:48:49.132929   13912 out.go:179] * [functional-482100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 08:48:49.136489   13912 notify.go:221] Checking for updates...
	I1213 08:48:49.137493   13912 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:48:49.140479   13912 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:48:49.144652   13912 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 08:48:49.146884   13912 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:48:49.148865   13912 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:48:49.151507   13912 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:48:49.268373   13912 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 08:48:49.271653   13912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:48:49.502347   13912 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:78 SystemTime:2025-12-13 08:48:49.480798405 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 08:48:49.506785   13912 out.go:179] * Using the docker driver based on user configuration
	I1213 08:48:49.511902   13912 start.go:309] selected driver: docker
	I1213 08:48:49.511902   13912 start.go:927] validating driver "docker" against <nil>
	I1213 08:48:49.511902   13912 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:48:49.598272   13912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:48:49.824111   13912 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:78 SystemTime:2025-12-13 08:48:49.802488128 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 08:48:49.824111   13912 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 08:48:49.825190   13912 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 08:48:49.827723   13912 out.go:179] * Using Docker Desktop driver with root privileges
	I1213 08:48:49.829695   13912 cni.go:84] Creating CNI manager for ""
	I1213 08:48:49.829695   13912 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 08:48:49.829695   13912 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	W1213 08:48:49.829695   13912 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:63834 to docker env.
	W1213 08:48:49.829695   13912 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:63834 to docker env.
	I1213 08:48:49.829695   13912 start.go:353] cluster config:
	{Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:48:49.833412   13912 out.go:179] * Starting "functional-482100" primary control-plane node in "functional-482100" cluster
	I1213 08:48:49.839450   13912 cache.go:134] Beginning downloading kic base image for docker with docker
	I1213 08:48:49.841506   13912 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 08:48:49.845695   13912 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 08:48:49.845695   13912 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 08:48:49.845695   13912 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1213 08:48:49.845695   13912 cache.go:65] Caching tarball of preloaded images
	I1213 08:48:49.846700   13912 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 08:48:49.846700   13912 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1213 08:48:49.846700   13912 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\config.json ...
	I1213 08:48:49.846700   13912 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\config.json: {Name:mk9403eca5181fe78560a3295157db36bbf094cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:48:49.921840   13912 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 08:48:49.921840   13912 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 08:48:49.921840   13912 cache.go:243] Successfully downloaded all kic artifacts
	I1213 08:48:49.921840   13912 start.go:360] acquireMachinesLock for functional-482100: {Name:mkdbad0c5d0c221588a4a9490c5c0730668b0a50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 08:48:49.922405   13912 start.go:364] duration metric: took 524.7µs to acquireMachinesLock for "functional-482100"
	I1213 08:48:49.922437   13912 start.go:93] Provisioning new machine with config: &{Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 08:48:49.922437   13912 start.go:125] createHost starting for "" (driver="docker")
	I1213 08:48:49.925656   13912 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1213 08:48:49.925656   13912 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:63834 to docker env.
	W1213 08:48:49.925656   13912 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:63834 to docker env.
	I1213 08:48:49.925656   13912 start.go:159] libmachine.API.Create for "functional-482100" (driver="docker")
	I1213 08:48:49.925656   13912 client.go:173] LocalClient.Create starting
	I1213 08:48:49.926286   13912 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1213 08:48:49.926286   13912 main.go:143] libmachine: Decoding PEM data...
	I1213 08:48:49.926286   13912 main.go:143] libmachine: Parsing certificate...
	I1213 08:48:49.926286   13912 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1213 08:48:49.926932   13912 main.go:143] libmachine: Decoding PEM data...
	I1213 08:48:49.926932   13912 main.go:143] libmachine: Parsing certificate...
	I1213 08:48:49.931645   13912 cli_runner.go:164] Run: docker network inspect functional-482100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 08:48:49.987455   13912 cli_runner.go:211] docker network inspect functional-482100 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 08:48:49.992668   13912 network_create.go:284] running [docker network inspect functional-482100] to gather additional debugging logs...
	I1213 08:48:49.992668   13912 cli_runner.go:164] Run: docker network inspect functional-482100
	W1213 08:48:50.052792   13912 cli_runner.go:211] docker network inspect functional-482100 returned with exit code 1
	I1213 08:48:50.052792   13912 network_create.go:287] error running [docker network inspect functional-482100]: docker network inspect functional-482100: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-482100 not found
	I1213 08:48:50.052792   13912 network_create.go:289] output of [docker network inspect functional-482100]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-482100 not found
	
	** /stderr **
	I1213 08:48:50.055789   13912 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 08:48:50.118731   13912 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001833950}
	I1213 08:48:50.118731   13912 network_create.go:124] attempt to create docker network functional-482100 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1213 08:48:50.121769   13912 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-482100 functional-482100
	I1213 08:48:50.253811   13912 network_create.go:108] docker network functional-482100 192.168.49.0/24 created
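The probe-then-create sequence above (inspect exits non-zero while the network is missing, then the network is created with an explicit subnet) can be reproduced by hand. A minimal sketch using the name and subnet from this run, with a subset of the options and labels shown above:

    # inspect fails on the first start, so the create only runs when needed
    docker network inspect functional-482100 >/dev/null 2>&1 || \
    docker network create --driver=bridge \
        --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
        -o com.docker.network.driver.mtu=1500 \
        --label=created_by.minikube.sigs.k8s.io=true \
        functional-482100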
	I1213 08:48:50.253811   13912 kic.go:121] calculated static IP "192.168.49.2" for the "functional-482100" container
	I1213 08:48:50.262465   13912 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 08:48:50.319827   13912 cli_runner.go:164] Run: docker volume create functional-482100 --label name.minikube.sigs.k8s.io=functional-482100 --label created_by.minikube.sigs.k8s.io=true
	I1213 08:48:50.383615   13912 oci.go:103] Successfully created a docker volume functional-482100
	I1213 08:48:50.386998   13912 cli_runner.go:164] Run: docker run --rm --name functional-482100-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-482100 --entrypoint /usr/bin/test -v functional-482100:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 08:48:51.799526   13912 cli_runner.go:217] Completed: docker run --rm --name functional-482100-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-482100 --entrypoint /usr/bin/test -v functional-482100:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.4125211s)
	I1213 08:48:51.800521   13912 oci.go:107] Successfully prepared a docker volume functional-482100
	I1213 08:48:51.800521   13912 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 08:48:51.800521   13912 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 08:48:51.803994   13912 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-482100:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 08:49:06.742873   13912 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-482100:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (14.938802s)
	I1213 08:49:06.742873   13912 kic.go:203] duration metric: took 14.9422747s to extract preloaded images to volume ...
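The preload is applied by mounting the lz4 tarball read-only into a throwaway container whose /extractDir is the volume that later backs the node's /var. A sketch of the same pattern; the host tarball path is a placeholder, the image and volume name are taken from this run:

    docker volume create functional-482100
    # untar the preloaded images straight into the volume
    docker run --rm --entrypoint /usr/bin/tar \
        -v /path/to/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro \
        -v functional-482100:/extractDir \
        gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083 \
        -I lz4 -xf /preloaded.tar -C /extractDir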
	I1213 08:49:06.746701   13912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:49:06.981040   13912 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:78 SystemTime:2025-12-13 08:49:06.958973111 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 08:49:06.984802   13912 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 08:49:07.220009   13912 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-482100 --name functional-482100 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-482100 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-482100 --network functional-482100 --ip 192.168.49.2 --volume functional-482100:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
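Each --publish=127.0.0.1:: flag in the run command above binds a guest port to an ephemeral loopback port on the host; the assignments can be read back afterwards, for example:

    docker port functional-482100 22/tcp     # 127.0.0.1:63841 in this run
    docker port functional-482100 8441/tcp   # forwarded API server port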
	I1213 08:49:07.902334   13912 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Running}}
	I1213 08:49:07.959461   13912 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
	I1213 08:49:08.012457   13912 cli_runner.go:164] Run: docker exec functional-482100 stat /var/lib/dpkg/alternatives/iptables
	I1213 08:49:08.118053   13912 oci.go:144] the created container "functional-482100" has a running status.
	I1213 08:49:08.119048   13912 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa...
	I1213 08:49:08.178588   13912 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 08:49:08.252542   13912 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
	I1213 08:49:08.309523   13912 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 08:49:08.309523   13912 kic_runner.go:114] Args: [docker exec --privileged functional-482100 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 08:49:08.428531   13912 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa...
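Once the generated key is in /home/docker/.ssh/authorized_keys and port 22 is published, the node is reachable as an ordinary SSH host. A sketch; <.minikube> stands for the .minikube directory, and the host port is the one assigned in this run:

    ssh -i <.minikube>\machines\functional-482100\id_rsa -p 63841 docker@127.0.0.1 hostname
    # or let minikube resolve the port and key itself:
    minikube ssh -p functional-482100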
	I1213 08:49:10.546144   13912 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
	I1213 08:49:10.601340   13912 machine.go:94] provisionDockerMachine start ...
	I1213 08:49:10.604342   13912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:49:10.670506   13912 main.go:143] libmachine: Using SSH client type: native
	I1213 08:49:10.685506   13912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:49:10.685506   13912 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 08:49:10.867831   13912 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-482100
	
	I1213 08:49:10.867831   13912 ubuntu.go:182] provisioning hostname "functional-482100"
	I1213 08:49:10.872902   13912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:49:10.930316   13912 main.go:143] libmachine: Using SSH client type: native
	I1213 08:49:10.930316   13912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:49:10.930316   13912 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-482100 && echo "functional-482100" | sudo tee /etc/hostname
	I1213 08:49:11.123598   13912 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-482100
	
	I1213 08:49:11.127788   13912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:49:11.185028   13912 main.go:143] libmachine: Using SSH client type: native
	I1213 08:49:11.185028   13912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:49:11.185028   13912 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-482100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-482100/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-482100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 08:49:11.370237   13912 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 08:49:11.370237   13912 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1213 08:49:11.370237   13912 ubuntu.go:190] setting up certificates
	I1213 08:49:11.370237   13912 provision.go:84] configureAuth start
	I1213 08:49:11.374359   13912 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-482100
	I1213 08:49:11.430529   13912 provision.go:143] copyHostCerts
	I1213 08:49:11.430529   13912 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1213 08:49:11.430529   13912 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1213 08:49:11.431167   13912 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1213 08:49:11.431709   13912 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1213 08:49:11.431709   13912 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1213 08:49:11.432290   13912 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1213 08:49:11.433044   13912 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1213 08:49:11.433044   13912 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1213 08:49:11.433104   13912 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1213 08:49:11.433954   13912 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-482100 san=[127.0.0.1 192.168.49.2 functional-482100 localhost minikube]
	I1213 08:49:11.567304   13912 provision.go:177] copyRemoteCerts
	I1213 08:49:11.571304   13912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 08:49:11.574302   13912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:49:11.627411   13912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:49:11.765811   13912 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 08:49:11.792573   13912 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 08:49:11.817309   13912 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 08:49:11.844386   13912 provision.go:87] duration metric: took 474.1464ms to configureAuth
	I1213 08:49:11.844386   13912 ubuntu.go:206] setting minikube options for container-runtime
	I1213 08:49:11.845118   13912 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 08:49:11.848545   13912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:49:11.903628   13912 main.go:143] libmachine: Using SSH client type: native
	I1213 08:49:11.903744   13912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:49:11.903744   13912 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 08:49:12.090894   13912 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1213 08:49:12.090894   13912 ubuntu.go:71] root file system type: overlay
	I1213 08:49:12.090894   13912 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 08:49:12.094373   13912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:49:12.148406   13912 main.go:143] libmachine: Using SSH client type: native
	I1213 08:49:12.148853   13912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:49:12.148922   13912 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 08:49:12.343524   13912 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 08:49:12.346976   13912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:49:12.403164   13912 main.go:143] libmachine: Using SSH client type: native
	I1213 08:49:12.403164   13912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:49:12.403164   13912 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 08:49:13.850872   13912 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-13 08:49:12.333882499 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1213 08:49:13.850872   13912 machine.go:97] duration metric: took 3.2495143s to provisionDockerMachine
	I1213 08:49:13.850872   13912 client.go:176] duration metric: took 23.9250907s to LocalClient.Create
	I1213 08:49:13.850872   13912 start.go:167] duration metric: took 23.9250907s to libmachine.API.Create "functional-482100"
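The unit swap at 08:49:12 is guarded by a diff, so dockerd is only restarted when the generated file actually differs from the installed one. After it runs, the active flags can be confirmed inside the node with, for example:

    sudo systemctl cat docker.service    # unit actually loaded
    systemctl show docker -p ExecStart   # dockerd flags in effect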
	I1213 08:49:13.850872   13912 start.go:293] postStartSetup for "functional-482100" (driver="docker")
	I1213 08:49:13.850872   13912 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 08:49:13.854904   13912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 08:49:13.858524   13912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:49:13.913822   13912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:49:14.050049   13912 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 08:49:14.057751   13912 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 08:49:14.057751   13912 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 08:49:14.057751   13912 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1213 08:49:14.058726   13912 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1213 08:49:14.058726   13912 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> 29682.pem in /etc/ssl/certs
	I1213 08:49:14.058726   13912 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\2968\hosts -> hosts in /etc/test/nested/copy/2968
	I1213 08:49:14.064221   13912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2968
	I1213 08:49:14.077575   13912 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /etc/ssl/certs/29682.pem (1708 bytes)
	I1213 08:49:14.109181   13912 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\2968\hosts --> /etc/test/nested/copy/2968/hosts (40 bytes)
	I1213 08:49:14.138828   13912 start.go:296] duration metric: took 287.9546ms for postStartSetup
	I1213 08:49:14.144326   13912 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-482100
	I1213 08:49:14.196772   13912 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\config.json ...
	I1213 08:49:14.202512   13912 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 08:49:14.206067   13912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:49:14.261229   13912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:49:14.384579   13912 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 08:49:14.394622   13912 start.go:128] duration metric: took 24.4720569s to createHost
	I1213 08:49:14.394622   13912 start.go:83] releasing machines lock for "functional-482100", held for 24.4720886s
	I1213 08:49:14.397289   13912 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-482100
	I1213 08:49:14.456736   13912 out.go:179] * Found network options:
	I1213 08:49:14.458642   13912 out.go:179]   - HTTP_PROXY=localhost:63834
	W1213 08:49:14.462530   13912 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1213 08:49:14.465265   13912 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1213 08:49:14.469316   13912 out.go:179]   - HTTP_PROXY=localhost:63834
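The two warnings above are related: minikube refuses to inject a localhost proxy into the container, and NO_PROXY does not cover the node IP. One way to avoid the second warning before starting, a POSIX-shell sketch with values from this run (on the Windows host itself, set the equivalent environment variables):

    export NO_PROXY=localhost,127.0.0.1,192.168.49.0/24,10.96.0.0/12
    minikube start -p functional-482100 --driver=docker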
	I1213 08:49:14.474652   13912 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1213 08:49:14.478488   13912 ssh_runner.go:195] Run: cat /version.json
	I1213 08:49:14.478488   13912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:49:14.481041   13912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:49:14.530721   13912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:49:14.530721   13912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	W1213 08:49:14.648768   13912 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1213 08:49:14.653480   13912 ssh_runner.go:195] Run: systemctl --version
	I1213 08:49:14.667342   13912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 08:49:14.675023   13912 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 08:49:14.681451   13912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 08:49:14.734982   13912 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 08:49:14.734982   13912 start.go:496] detecting cgroup driver to use...
	I1213 08:49:14.734982   13912 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 08:49:14.734982   13912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1213 08:49:14.749249   13912 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1213 08:49:14.749249   13912 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1213 08:49:14.766132   13912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 08:49:14.783356   13912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 08:49:14.799255   13912 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 08:49:14.803199   13912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 08:49:14.822290   13912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:49:14.842193   13912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 08:49:14.861478   13912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:49:14.879143   13912 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 08:49:14.898419   13912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 08:49:14.918224   13912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 08:49:14.938020   13912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
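The sed edits above pin the sandbox (pause) image, force cgroupfs rather than the systemd cgroup driver, and re-enable unprivileged ports in containerd's config. A quick way to spot-check the result inside the node:

    grep -nE 'sandbox_image|SystemdCgroup|enable_unprivileged_ports' /etc/containerd/config.toml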
	I1213 08:49:14.960301   13912 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 08:49:14.979476   13912 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 08:49:14.995539   13912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:49:15.130391   13912 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 08:49:15.276813   13912 start.go:496] detecting cgroup driver to use...
	I1213 08:49:15.276813   13912 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 08:49:15.281108   13912 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 08:49:15.304763   13912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 08:49:15.328149   13912 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 08:49:15.390130   13912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 08:49:15.412619   13912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 08:49:15.430406   13912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 08:49:15.455697   13912 ssh_runner.go:195] Run: which cri-dockerd
	I1213 08:49:15.466559   13912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 08:49:15.479568   13912 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1213 08:49:15.504662   13912 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 08:49:15.638356   13912 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 08:49:15.773849   13912 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 08:49:15.773849   13912 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 08:49:15.799515   13912 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1213 08:49:15.822315   13912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:49:15.958057   13912 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 08:49:16.810269   13912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 08:49:16.832333   13912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 08:49:16.854110   13912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 08:49:16.876728   13912 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 08:49:17.031506   13912 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 08:49:17.171183   13912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:49:17.310337   13912 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 08:49:17.336653   13912 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1213 08:49:17.358910   13912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:49:17.491660   13912 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 08:49:17.596414   13912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 08:49:17.614095   13912 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 08:49:17.618099   13912 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 08:49:17.625693   13912 start.go:564] Will wait 60s for crictl version
	I1213 08:49:17.630289   13912 ssh_runner.go:195] Run: which crictl
	I1213 08:49:17.642094   13912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 08:49:17.681013   13912 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1213 08:49:17.684535   13912 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 08:49:17.725206   13912 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 08:49:17.762483   13912 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1213 08:49:17.766168   13912 cli_runner.go:164] Run: docker exec -t functional-482100 dig +short host.docker.internal
	I1213 08:49:17.892001   13912 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1213 08:49:17.895024   13912 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1213 08:49:17.904164   13912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 08:49:17.925605   13912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-482100
	I1213 08:49:17.979384   13912 kubeadm.go:884] updating cluster {Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 08:49:17.979384   13912 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 08:49:17.985265   13912 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 08:49:18.020404   13912 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 08:49:18.020433   13912 docker.go:621] Images already preloaded, skipping extraction
	I1213 08:49:18.023796   13912 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 08:49:18.052515   13912 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 08:49:18.052515   13912 cache_images.go:86] Images are preloaded, skipping loading
	I1213 08:49:18.052515   13912 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1213 08:49:18.052515   13912 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-482100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
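The unit fragment above is the kubelet drop-in that gets scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (the 323-byte transfer). The empty ExecStart= line is the standard systemd idiom for overriding a start command: it clears the base unit's ExecStart so the second one replaces it rather than adding a second invocation. On the node, the merged result can be inspected with:

	systemctl cat kubelet    # base unit plus the 10-kubeadm.conf drop-in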
	I1213 08:49:18.055995   13912 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1213 08:49:18.129842   13912 cni.go:84] Creating CNI manager for ""
	I1213 08:49:18.129933   13912 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 08:49:18.129933   13912 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 08:49:18.129933   13912 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-482100 NodeName:functional-482100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 08:49:18.130073   13912 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-482100"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
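This is the complete generated kubeadm config: InitConfiguration and ClusterConfiguration for the control plane, a KubeletConfiguration with disk-pressure eviction deliberately disabled (the 100%/0% thresholds noted in the comment), and a KubeProxyConfiguration that skips the conntrack sysctls. Before handing it to kubeadm init, a config like this can be sanity-checked offline; a sketch using kubeadm's own validator (present in recent kubeadm releases; an assumption here, since this log never runs it):

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml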
	
	I1213 08:49:18.133694   13912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 08:49:18.145540   13912 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 08:49:18.150808   13912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 08:49:18.163021   13912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1213 08:49:18.183871   13912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 08:49:18.203157   13912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1213 08:49:18.226904   13912 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 08:49:18.233541   13912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 08:49:18.252659   13912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:49:18.393481   13912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 08:49:18.413714   13912 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100 for IP: 192.168.49.2
	I1213 08:49:18.413714   13912 certs.go:195] generating shared ca certs ...
	I1213 08:49:18.413714   13912 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:49:18.415144   13912 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1213 08:49:18.415144   13912 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1213 08:49:18.415977   13912 certs.go:257] generating profile certs ...
	I1213 08:49:18.416256   13912 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\client.key
	I1213 08:49:18.416349   13912 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\client.crt with IP's: []
	I1213 08:49:18.497344   13912 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\client.crt ...
	I1213 08:49:18.497344   13912 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\client.crt: {Name:mk586593d4a1b872371d1d73158843ffaaacb80c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:49:18.498344   13912 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\client.key ...
	I1213 08:49:18.498344   13912 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\client.key: {Name:mk49612bd9fd6e75df7c960ae101c225ea1af9b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:49:18.499340   13912 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.key.13621831
	I1213 08:49:18.499340   13912 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.crt.13621831 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1213 08:49:18.524632   13912 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.crt.13621831 ...
	I1213 08:49:18.524632   13912 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.crt.13621831: {Name:mk803dab43e9f8ad54b2060d6fe74f7b02769871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:49:18.525632   13912 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.key.13621831 ...
	I1213 08:49:18.525632   13912 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.key.13621831: {Name:mk33e7c8c39ed813814ae9a4882c281050fcbaeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:49:18.526635   13912 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.crt.13621831 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.crt
	I1213 08:49:18.540656   13912 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.key.13621831 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.key
	I1213 08:49:18.540656   13912 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.key
	I1213 08:49:18.540656   13912 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.crt with IP's: []
	I1213 08:49:18.642732   13912 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.crt ...
	I1213 08:49:18.642732   13912 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.crt: {Name:mk49cfc4c8616eb50c3313d1ff289d2af81d11c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:49:18.643745   13912 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.key ...
	I1213 08:49:18.643745   13912 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.key: {Name:mk21581ed608fc6390bf59f70a4f8c6f8bd5a5e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
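Three profile key pairs now exist under the profile directory on the Windows host: the shared CAs were reused, while the client, apiserver, and aggregator (proxy-client) certificates were minted fresh. Any of them can be checked with stock openssl, for example:

	openssl x509 -noout -subject -issuer -dates -in client.crt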
	I1213 08:49:18.657730   13912 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem (1338 bytes)
	W1213 08:49:18.657730   13912 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968_empty.pem, impossibly tiny 0 bytes
	I1213 08:49:18.657730   13912 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1213 08:49:18.657730   13912 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1213 08:49:18.657730   13912 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1213 08:49:18.657730   13912 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1213 08:49:18.658734   13912 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem (1708 bytes)
	I1213 08:49:18.658734   13912 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 08:49:18.689935   13912 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 08:49:18.714846   13912 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 08:49:18.740469   13912 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 08:49:18.769300   13912 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 08:49:18.794390   13912 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 08:49:18.822233   13912 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 08:49:18.850205   13912 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 08:49:18.874561   13912 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /usr/share/ca-certificates/29682.pem (1708 bytes)
	I1213 08:49:18.905362   13912 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 08:49:18.933344   13912 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem --> /usr/share/ca-certificates/2968.pem (1338 bytes)
	I1213 08:49:18.957799   13912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 08:49:18.983719   13912 ssh_runner.go:195] Run: openssl version
	I1213 08:49:18.997763   13912 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2968.pem
	I1213 08:49:19.013128   13912 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2968.pem /etc/ssl/certs/2968.pem
	I1213 08:49:19.030485   13912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2968.pem
	I1213 08:49:19.039131   13912 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:48 /usr/share/ca-certificates/2968.pem
	I1213 08:49:19.043200   13912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2968.pem
	I1213 08:49:19.091834   13912 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 08:49:19.110492   13912 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2968.pem /etc/ssl/certs/51391683.0
	I1213 08:49:19.131557   13912 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29682.pem
	I1213 08:49:19.148284   13912 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29682.pem /etc/ssl/certs/29682.pem
	I1213 08:49:19.169551   13912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29682.pem
	I1213 08:49:19.178653   13912 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:48 /usr/share/ca-certificates/29682.pem
	I1213 08:49:19.183043   13912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29682.pem
	I1213 08:49:19.232129   13912 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 08:49:19.249674   13912 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/29682.pem /etc/ssl/certs/3ec20f2e.0
	I1213 08:49:19.265549   13912 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:49:19.281883   13912 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 08:49:19.297764   13912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:49:19.304919   13912 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:49:19.310771   13912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:49:19.360844   13912 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 08:49:19.377864   13912 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
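The openssl x509 -hash calls above compute each certificate's subject hash, and the ln -fs lines create the <hash>.0 symlinks that OpenSSL's CA lookup expects under /etc/ssl/certs; b5213941 in the last two lines is exactly that hash for minikubeCA.pem. The same link can be reproduced by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"

Note the hash entries point at /etc/ssl/certs/<name>.pem, which are themselves symlinks into /usr/share/ca-certificates, so each lookup resolves through two hops.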
	I1213 08:49:19.396086   13912 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 08:49:19.404662   13912 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 08:49:19.404662   13912 kubeadm.go:401] StartCluster: {Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:49:19.408142   13912 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 08:49:19.440107   13912 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 08:49:19.457511   13912 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 08:49:19.472294   13912 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 08:49:19.476208   13912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 08:49:19.490213   13912 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 08:49:19.490213   13912 kubeadm.go:158] found existing configuration files:
	
	I1213 08:49:19.495237   13912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 08:49:19.510504   13912 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 08:49:19.514668   13912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 08:49:19.534381   13912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 08:49:19.547460   13912 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 08:49:19.552927   13912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 08:49:19.573917   13912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 08:49:19.588192   13912 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 08:49:19.593966   13912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 08:49:19.611494   13912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 08:49:19.624296   13912 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 08:49:19.628085   13912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 08:49:19.646501   13912 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 08:49:19.759145   13912 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1213 08:49:19.840944   13912 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 08:49:19.939675   13912 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 08:53:21.546085   13912 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 08:53:21.546187   13912 kubeadm.go:319] 
	I1213 08:53:21.546522   13912 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 08:53:21.548331   13912 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 08:53:21.548331   13912 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 08:53:21.548902   13912 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 08:53:21.548902   13912 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1213 08:53:21.548902   13912 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1213 08:53:21.548902   13912 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1213 08:53:21.548902   13912 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1213 08:53:21.548902   13912 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1213 08:53:21.548902   13912 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1213 08:53:21.549501   13912 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1213 08:53:21.549501   13912 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1213 08:53:21.549712   13912 kubeadm.go:319] CONFIG_INET: enabled
	I1213 08:53:21.549810   13912 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1213 08:53:21.549835   13912 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1213 08:53:21.549835   13912 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1213 08:53:21.549835   13912 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1213 08:53:21.549835   13912 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1213 08:53:21.549835   13912 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1213 08:53:21.550414   13912 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1213 08:53:21.550414   13912 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1213 08:53:21.550414   13912 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1213 08:53:21.550414   13912 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1213 08:53:21.550414   13912 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1213 08:53:21.550414   13912 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1213 08:53:21.550983   13912 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1213 08:53:21.550983   13912 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1213 08:53:21.550983   13912 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1213 08:53:21.550983   13912 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1213 08:53:21.550983   13912 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1213 08:53:21.550983   13912 kubeadm.go:319] OS: Linux
	I1213 08:53:21.550983   13912 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 08:53:21.551515   13912 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 08:53:21.551515   13912 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 08:53:21.551636   13912 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 08:53:21.551799   13912 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 08:53:21.551851   13912 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 08:53:21.552015   13912 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 08:53:21.552070   13912 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 08:53:21.552167   13912 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 08:53:21.552320   13912 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 08:53:21.552436   13912 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 08:53:21.552436   13912 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 08:53:21.552436   13912 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 08:53:21.556435   13912 out.go:252]   - Generating certificates and keys ...
	I1213 08:53:21.556478   13912 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 08:53:21.556478   13912 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 08:53:21.556478   13912 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 08:53:21.556478   13912 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 08:53:21.556478   13912 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 08:53:21.557128   13912 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 08:53:21.557282   13912 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 08:53:21.557373   13912 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-482100 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 08:53:21.557373   13912 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 08:53:21.557373   13912 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-482100 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 08:53:21.557896   13912 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 08:53:21.557928   13912 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 08:53:21.557928   13912 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 08:53:21.557928   13912 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 08:53:21.557928   13912 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 08:53:21.557928   13912 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 08:53:21.557928   13912 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 08:53:21.558638   13912 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 08:53:21.558638   13912 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 08:53:21.558638   13912 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 08:53:21.558638   13912 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 08:53:21.562548   13912 out.go:252]   - Booting up control plane ...
	I1213 08:53:21.562548   13912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 08:53:21.562548   13912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 08:53:21.562548   13912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 08:53:21.563157   13912 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 08:53:21.563157   13912 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 08:53:21.563157   13912 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 08:53:21.563157   13912 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 08:53:21.563157   13912 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 08:53:21.563157   13912 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 08:53:21.564125   13912 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 08:53:21.564125   13912 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000508197s
	I1213 08:53:21.564125   13912 kubeadm.go:319] 
	I1213 08:53:21.564125   13912 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 08:53:21.564125   13912 kubeadm.go:319] 	- The kubelet is not running
	I1213 08:53:21.564125   13912 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 08:53:21.564125   13912 kubeadm.go:319] 
	I1213 08:53:21.564125   13912 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 08:53:21.564125   13912 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 08:53:21.564125   13912 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 08:53:21.564125   13912 kubeadm.go:319] 
	W1213 08:53:21.564125   13912 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-482100 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-482100 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000508197s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
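kubeadm's own hint is the right lead here: the healthz probe on 127.0.0.1:10248 was refused for the entire 4m0s window, meaning the kubelet either never started or exited immediately, so none of the static-pod manifests were ever acted on. The suggested triage, plus the exact probe kubeadm was retrying:

	systemctl status kubelet
	journalctl -xeu kubelet
	curl -sSL http://127.0.0.1:10248/healthz

minikube now resets and retries the identical init below and fails the same way four minutes later, which points at a persistent node-level problem rather than a flake; the cgroups v1 deprecation warning above is one plausible suspect on this WSL2 kernel.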
	
	I1213 08:53:21.569758   13912 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1213 08:53:22.031046   13912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 08:53:22.050459   13912 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 08:53:22.054891   13912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 08:53:22.067479   13912 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 08:53:22.067479   13912 kubeadm.go:158] found existing configuration files:
	
	I1213 08:53:22.071584   13912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 08:53:22.085343   13912 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 08:53:22.090124   13912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 08:53:22.108199   13912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 08:53:22.121534   13912 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 08:53:22.126346   13912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 08:53:22.146959   13912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 08:53:22.163237   13912 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 08:53:22.168128   13912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 08:53:22.187212   13912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 08:53:22.202775   13912 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 08:53:22.206771   13912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 08:53:22.225851   13912 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 08:53:22.361815   13912 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1213 08:53:22.446830   13912 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 08:53:22.547456   13912 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 08:57:23.308344   13912 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 08:57:23.308344   13912 kubeadm.go:319] 
	I1213 08:57:23.308878   13912 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 08:57:23.319264   13912 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 08:57:23.319264   13912 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 08:57:23.319875   13912 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 08:57:23.319988   13912 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1213 08:57:23.319988   13912 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1213 08:57:23.319988   13912 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1213 08:57:23.319988   13912 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1213 08:57:23.319988   13912 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1213 08:57:23.319988   13912 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1213 08:57:23.320514   13912 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1213 08:57:23.320584   13912 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1213 08:57:23.320647   13912 kubeadm.go:319] CONFIG_INET: enabled
	I1213 08:57:23.320773   13912 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1213 08:57:23.320835   13912 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1213 08:57:23.320960   13912 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1213 08:57:23.321092   13912 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1213 08:57:23.321216   13912 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1213 08:57:23.321277   13912 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1213 08:57:23.321402   13912 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1213 08:57:23.321525   13912 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1213 08:57:23.321588   13912 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1213 08:57:23.321712   13912 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1213 08:57:23.321842   13912 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1213 08:57:23.321904   13912 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1213 08:57:23.322090   13912 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1213 08:57:23.322152   13912 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1213 08:57:23.322276   13912 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1213 08:57:23.322337   13912 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1213 08:57:23.322461   13912 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1213 08:57:23.322523   13912 kubeadm.go:319] OS: Linux
	I1213 08:57:23.322584   13912 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 08:57:23.322709   13912 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 08:57:23.322771   13912 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 08:57:23.322895   13912 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 08:57:23.322957   13912 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 08:57:23.323020   13912 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 08:57:23.323144   13912 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 08:57:23.323205   13912 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 08:57:23.323328   13912 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 08:57:23.323451   13912 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 08:57:23.323451   13912 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 08:57:23.323451   13912 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 08:57:23.323451   13912 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 08:57:23.329741   13912 out.go:252]   - Generating certificates and keys ...
	I1213 08:57:23.329741   13912 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 08:57:23.330741   13912 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 08:57:23.330741   13912 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 08:57:23.330741   13912 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 08:57:23.330741   13912 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 08:57:23.330741   13912 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 08:57:23.330741   13912 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 08:57:23.330741   13912 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 08:57:23.330741   13912 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 08:57:23.330741   13912 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 08:57:23.330741   13912 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 08:57:23.331736   13912 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 08:57:23.331874   13912 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 08:57:23.331874   13912 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 08:57:23.331874   13912 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 08:57:23.331874   13912 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 08:57:23.331874   13912 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 08:57:23.331874   13912 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 08:57:23.331874   13912 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 08:57:23.339506   13912 out.go:252]   - Booting up control plane ...
	I1213 08:57:23.339506   13912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 08:57:23.339506   13912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 08:57:23.339506   13912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 08:57:23.339506   13912 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 08:57:23.339506   13912 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 08:57:23.339506   13912 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 08:57:23.339506   13912 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 08:57:23.340561   13912 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 08:57:23.340561   13912 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 08:57:23.340561   13912 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 08:57:23.340561   13912 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00054602s
	I1213 08:57:23.340561   13912 kubeadm.go:319] 
	I1213 08:57:23.340561   13912 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 08:57:23.340561   13912 kubeadm.go:319] 	- The kubelet is not running
	I1213 08:57:23.340561   13912 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 08:57:23.340561   13912 kubeadm.go:319] 
	I1213 08:57:23.340561   13912 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 08:57:23.341566   13912 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 08:57:23.341566   13912 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 08:57:23.341566   13912 kubeadm.go:319] 
	I1213 08:57:23.341566   13912 kubeadm.go:403] duration metric: took 8m3.9340067s to StartCluster
	I1213 08:57:23.341566   13912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:23.345562   13912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:23.681748   13912 cri.go:89] found id: ""
	I1213 08:57:23.681825   13912 logs.go:282] 0 containers: []
	W1213 08:57:23.681825   13912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:23.681825   13912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 08:57:23.685664   13912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:23.731853   13912 cri.go:89] found id: ""
	I1213 08:57:23.731928   13912 logs.go:282] 0 containers: []
	W1213 08:57:23.731928   13912 logs.go:284] No container was found matching "etcd"
	I1213 08:57:23.731928   13912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 08:57:23.736056   13912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:23.780107   13912 cri.go:89] found id: ""
	I1213 08:57:23.780107   13912 logs.go:282] 0 containers: []
	W1213 08:57:23.780107   13912 logs.go:284] No container was found matching "coredns"
	I1213 08:57:23.780107   13912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:23.784841   13912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:23.827904   13912 cri.go:89] found id: ""
	I1213 08:57:23.827904   13912 logs.go:282] 0 containers: []
	W1213 08:57:23.827904   13912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:23.827942   13912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:23.832495   13912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:23.875787   13912 cri.go:89] found id: ""
	I1213 08:57:23.875787   13912 logs.go:282] 0 containers: []
	W1213 08:57:23.875787   13912 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:23.875787   13912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:23.880297   13912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:23.927065   13912 cri.go:89] found id: ""
	I1213 08:57:23.927065   13912 logs.go:282] 0 containers: []
	W1213 08:57:23.927065   13912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:23.927065   13912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:23.931104   13912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:23.969614   13912 cri.go:89] found id: ""
	I1213 08:57:23.969614   13912 logs.go:282] 0 containers: []
	W1213 08:57:23.969692   13912 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:23.969692   13912 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:23.969734   13912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:23.997025   13912 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:23.997025   13912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:24.264968   13912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:24.254562    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:24.255507    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:24.258673    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:24.259675    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:24.261100    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:24.254562    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:24.255507    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:24.258673    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:24.259675    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:24.261100    9818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:24.264995   13912 logs.go:123] Gathering logs for Docker ...
	I1213 08:57:24.265019   13912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 08:57:24.312093   13912 logs.go:123] Gathering logs for container status ...
	I1213 08:57:24.312093   13912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:24.360272   13912 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:24.360272   13912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 08:57:24.424550   13912 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00054602s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
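Note: the stderr block above carries the actual root cause for this run. The WSL2 node is still on cgroup v1, and kubelet v1.35 refuses to start unless cgroup v1 support is explicitly re-enabled. A minimal sketch of the override the warning describes, assuming the config path from the kubelet-start lines above; the heredoc is illustrative only (minikube/kubeadm rewrite this file on init, so this would not survive a re-init), and per the same warning the SystemVerification preflight would also have to be skipped:

        # Sketch: re-enable cgroup v1 in the KubeletConfiguration already on the node.
        # failCgroupV1 is the KubeletConfiguration field named by the warning above.
        sudo tee -a /var/lib/kubelet/config.yaml <<'EOF'
        failCgroupV1: false
        EOF
        sudo systemctl restart kubelet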
	W1213 08:57:24.424550   13912 out.go:285] * 
	W1213 08:57:24.424550   13912 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00054602s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 08:57:24.424550   13912 out.go:285] * 
	W1213 08:57:24.426471   13912 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 08:57:24.432723   13912 out.go:203] 
	W1213 08:57:24.435723   13912 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00054602s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 08:57:24.436215   13912 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 08:57:24.436215   13912 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
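Note: the two lines above are minikube's own remediation hint. A sketch of the suggested retry, reusing this run's binary and profile name; whether it clears the cgroup v1 validation on this host is not verified by this report:

        # From the suggestion above: pin the kubelet cgroup driver to systemd on start.
        out/minikube-windows-amd64.exe start -p functional-482100 --extra-config=kubelet.cgroup-driver=systemd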
	I1213 08:57:24.439170   13912 out.go:203] 
	
	
	==> Docker <==
	Dec 13 08:49:16 functional-482100 dockerd[1202]: time="2025-12-13T08:49:16.672137656Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 13 08:49:16 functional-482100 dockerd[1202]: time="2025-12-13T08:49:16.672238463Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 13 08:49:16 functional-482100 dockerd[1202]: time="2025-12-13T08:49:16.672248464Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 13 08:49:16 functional-482100 dockerd[1202]: time="2025-12-13T08:49:16.672253765Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 08:49:16 functional-482100 dockerd[1202]: time="2025-12-13T08:49:16.672260665Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 13 08:49:16 functional-482100 dockerd[1202]: time="2025-12-13T08:49:16.672278466Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 13 08:49:16 functional-482100 dockerd[1202]: time="2025-12-13T08:49:16.672311769Z" level=info msg="Initializing buildkit"
	Dec 13 08:49:16 functional-482100 dockerd[1202]: time="2025-12-13T08:49:16.795070850Z" level=info msg="Completed buildkit initialization"
	Dec 13 08:49:16 functional-482100 dockerd[1202]: time="2025-12-13T08:49:16.803110732Z" level=info msg="Daemon has completed initialization"
	Dec 13 08:49:16 functional-482100 dockerd[1202]: time="2025-12-13T08:49:16.803337048Z" level=info msg="API listen on /run/docker.sock"
	Dec 13 08:49:16 functional-482100 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 13 08:49:16 functional-482100 dockerd[1202]: time="2025-12-13T08:49:16.803341648Z" level=info msg="API listen on [::]:2376"
	Dec 13 08:49:16 functional-482100 dockerd[1202]: time="2025-12-13T08:49:16.803349949Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 08:49:17 functional-482100 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 08:49:17 functional-482100 cri-dockerd[1493]: time="2025-12-13T08:49:17Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 13 08:49:17 functional-482100 cri-dockerd[1493]: time="2025-12-13T08:49:17Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 13 08:49:17 functional-482100 cri-dockerd[1493]: time="2025-12-13T08:49:17Z" level=info msg="Start docker client with request timeout 0s"
	Dec 13 08:49:17 functional-482100 cri-dockerd[1493]: time="2025-12-13T08:49:17Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 13 08:49:17 functional-482100 cri-dockerd[1493]: time="2025-12-13T08:49:17Z" level=info msg="Loaded network plugin cni"
	Dec 13 08:49:17 functional-482100 cri-dockerd[1493]: time="2025-12-13T08:49:17Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 13 08:49:17 functional-482100 cri-dockerd[1493]: time="2025-12-13T08:49:17Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 13 08:49:17 functional-482100 cri-dockerd[1493]: time="2025-12-13T08:49:17Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 13 08:49:17 functional-482100 cri-dockerd[1493]: time="2025-12-13T08:49:17Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 13 08:49:17 functional-482100 cri-dockerd[1493]: time="2025-12-13T08:49:17Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 13 08:49:17 functional-482100 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
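Note: the daemon journal above shows Docker running the cgroupfs driver on a deprecated cgroup v1 setup ("Setting cgroupDriver cgroupfs" and the May 2029 deprecation warning). To confirm the daemon's view directly, assuming a recent Docker CLI where both template keys exist:

        # Expected on this host, given the warnings above: "cgroupfs 1"
        docker info --format '{{.CgroupDriver}} {{.CgroupVersion}}'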
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:26.318867    9991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:26.322007    9991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:26.323001    9991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:26.323971    9991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:26.325148    9991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000979] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001383] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001464] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001513] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000971] FS:  0000000000000000 GS:  0000000000000000
	[  +6.677188] CPU: 3 PID: 45454 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000852] RIP: 0033:0x7f6468d1ab20
	[  +0.000460] Code: Unable to access opcode bytes at RIP 0x7f6468d1aaf6.
	[  +0.000690] RSP: 002b:00007ffecb328370 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000960] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000835] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000828] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000836] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000835] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000836] FS:  0000000000000000 GS:  0000000000000000
	[  +0.796067] CPU: 2 PID: 45568 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000781] RIP: 0033:0x7f1766838b20
	[  +0.000389] Code: Unable to access opcode bytes at RIP 0x7f1766838af6.
	[  +0.000628] RSP: 002b:00007ffc28f795a0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000749] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000739] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000891] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001020] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001158] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001174] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 08:57:26 up 33 min,  0 user,  load average: 0.23, 0.43, 0.72
	Linux functional-482100 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 08:57:22 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 08:57:23 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 13 08:57:23 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:57:23 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:57:23 functional-482100 kubelet[9725]: E1213 08:57:23.568473    9725 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 08:57:23 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 08:57:23 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 08:57:24 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 13 08:57:24 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:57:24 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:57:24 functional-482100 kubelet[9827]: E1213 08:57:24.322219    9827 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 08:57:24 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 08:57:24 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 08:57:24 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 13 08:57:24 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:57:24 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:57:25 functional-482100 kubelet[9864]: E1213 08:57:25.076153    9864 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 08:57:25 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 08:57:25 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 08:57:25 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 13 08:57:25 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:57:25 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:57:25 functional-482100 kubelet[9890]: E1213 08:57:25.819750    9890 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 08:57:25 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 08:57:25 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-482100 -n functional-482100
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-482100 -n functional-482100: exit status 6 (594.7941ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 08:57:27.262860   11072 status.go:458] kubeconfig endpoint: get endpoint: "functional-482100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-482100" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (518.31s)
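Note: separately from the kubelet failure, the status check above reports a stale kubectl context ("functional-482100" missing from the kubeconfig). A sketch of the fix the warning itself names, spelled out with this run's binary and profile; the test harness does not actually run it:

        # Rewrites the kubeconfig entry for the profile named in the status error
        out/minikube-windows-amd64.exe update-context -p functional-482100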

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (374.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1213 08:57:27.311277    2968 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-482100 --alsologtostderr -v=8
E1213 08:57:36.661876    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:58:04.374213    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:59:46.003582    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:02:36.665090    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-482100 --alsologtostderr -v=8: exit status 80 (6m10.0577482s)

                                                
                                                
-- stdout --
	* [functional-482100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting "functional-482100" primary control-plane node in "functional-482100" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 08:57:27.379293    1308 out.go:360] Setting OutFile to fd 1960 ...
	I1213 08:57:27.421775    1308 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:57:27.421775    1308 out.go:374] Setting ErrFile to fd 2020...
	I1213 08:57:27.421858    1308 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:57:27.434678    1308 out.go:368] Setting JSON to false
	I1213 08:57:27.436793    1308 start.go:133] hostinfo: {"hostname":"minikube4","uptime":2054,"bootTime":1765614192,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 08:57:27.436793    1308 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 08:57:27.440227    1308 out.go:179] * [functional-482100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 08:57:27.444177    1308 notify.go:221] Checking for updates...
	I1213 08:57:27.444177    1308 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:57:27.446958    1308 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:57:27.448893    1308 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 08:57:27.451179    1308 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:57:27.453000    1308 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:57:27.455340    1308 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 08:57:27.456010    1308 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:57:27.677552    1308 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 08:57:27.681550    1308 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:57:27.918123    1308 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-13 08:57:27.897746454 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 08:57:27.922386    1308 out.go:179] * Using the docker driver based on existing profile
	I1213 08:57:27.925483    1308 start.go:309] selected driver: docker
	I1213 08:57:27.925483    1308 start.go:927] validating driver "docker" against &{Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:57:27.925483    1308 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:57:27.931484    1308 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:57:28.158174    1308 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-13 08:57:28.141185883 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 08:57:28.238865    1308 cni.go:84] Creating CNI manager for ""
	I1213 08:57:28.238865    1308 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 08:57:28.239498    1308 start.go:353] cluster config:
	{Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:57:28.243527    1308 out.go:179] * Starting "functional-482100" primary control-plane node in "functional-482100" cluster
	I1213 08:57:28.245818    1308 cache.go:134] Beginning downloading kic base image for docker with docker
	I1213 08:57:28.247303    1308 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 08:57:28.251374    1308 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 08:57:28.251465    1308 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 08:57:28.251634    1308 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1213 08:57:28.251673    1308 cache.go:65] Caching tarball of preloaded images
	I1213 08:57:28.251673    1308 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 08:57:28.251673    1308 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1213 08:57:28.251673    1308 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\config.json ...
	I1213 08:57:28.331506    1308 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 08:57:28.331506    1308 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 08:57:28.331506    1308 cache.go:243] Successfully downloaded all kic artifacts
	I1213 08:57:28.331506    1308 start.go:360] acquireMachinesLock for functional-482100: {Name:mkdbad0c5d0c221588a4a9490c5c0730668b0a50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 08:57:28.331506    1308 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-482100"
	I1213 08:57:28.331506    1308 start.go:96] Skipping create...Using existing machine configuration
	I1213 08:57:28.331506    1308 fix.go:54] fixHost starting: 
	I1213 08:57:28.338850    1308 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
	I1213 08:57:28.394405    1308 fix.go:112] recreateIfNeeded on functional-482100: state=Running err=<nil>
	W1213 08:57:28.394453    1308 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 08:57:28.397828    1308 out.go:252] * Updating the running docker "functional-482100" container ...
	I1213 08:57:28.397828    1308 machine.go:94] provisionDockerMachine start ...
	I1213 08:57:28.401414    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:28.456355    1308 main.go:143] libmachine: Using SSH client type: native
	I1213 08:57:28.457085    1308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:57:28.457134    1308 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 08:57:28.656820    1308 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-482100
	
	I1213 08:57:28.656820    1308 ubuntu.go:182] provisioning hostname "functional-482100"
	I1213 08:57:28.660505    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:28.713653    1308 main.go:143] libmachine: Using SSH client type: native
	I1213 08:57:28.714127    1308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:57:28.714127    1308 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-482100 && echo "functional-482100" | sudo tee /etc/hostname
	I1213 08:57:28.912851    1308 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-482100
	
	I1213 08:57:28.916558    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:28.972916    1308 main.go:143] libmachine: Using SSH client type: native
	I1213 08:57:28.973035    1308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:57:28.973035    1308 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-482100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-482100/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-482100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 08:57:29.158720    1308 main.go:143] libmachine: SSH cmd err, output: <nil>: 
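
	The three SSH exchanges above (hostname query, hostname set, /etc/hosts fix) all ride on the host port Docker published for the container's 22/tcp; the Go template in the repeated "docker container inspect" calls is how minikube resolves it. A hand-run equivalent, with the container name taken from this log and a Unix-style key path assumed in place of the Windows path shown in the sshutil lines:

		PORT=$(docker container inspect \
		  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
		  functional-482100)                                    # 63841 in this run
		ssh -i ~/.minikube/machines/functional-482100/id_rsa \
		  -p "$PORT" docker@127.0.0.1 hostname                  # prints: functional-482100
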
	I1213 08:57:29.158720    1308 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1213 08:57:29.158720    1308 ubuntu.go:190] setting up certificates
	I1213 08:57:29.158720    1308 provision.go:84] configureAuth start
	I1213 08:57:29.162705    1308 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-482100
	I1213 08:57:29.217525    1308 provision.go:143] copyHostCerts
	I1213 08:57:29.217525    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1213 08:57:29.217525    1308 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1213 08:57:29.217525    1308 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1213 08:57:29.218193    1308 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1213 08:57:29.218931    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1213 08:57:29.219078    1308 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1213 08:57:29.219114    1308 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1213 08:57:29.219299    1308 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1213 08:57:29.220064    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1213 08:57:29.220064    1308 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1213 08:57:29.220064    1308 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1213 08:57:29.220064    1308 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1213 08:57:29.220972    1308 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-482100 san=[127.0.0.1 192.168.49.2 functional-482100 localhost minikube]
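
	minikube regenerates the server certificate in Go, but the parameters on the line above translate directly: a key for org jenkins.functional-482100, signed by the cluster CA, with SANs covering the loopback and node IPs plus the host names. A rough openssl equivalent (file names assumed; bash process substitution supplies the SAN extension):

		openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
		  -out server.csr -subj "/O=jenkins.functional-482100"
		openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
		  -CAcreateserial -out server.pem -days 1095 \
		  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:functional-482100,DNS:localhost,DNS:minikube')
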
	I1213 08:57:29.312824    1308 provision.go:177] copyRemoteCerts
	I1213 08:57:29.317163    1308 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 08:57:29.320164    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:29.370164    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:29.504512    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1213 08:57:29.504655    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 08:57:29.542721    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1213 08:57:29.542721    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 08:57:29.574672    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1213 08:57:29.574672    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 08:57:29.604045    1308 provision.go:87] duration metric: took 445.3221ms to configureAuth
	I1213 08:57:29.604045    1308 ubuntu.go:206] setting minikube options for container-runtime
	I1213 08:57:29.605053    1308 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 08:57:29.610417    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:29.666069    1308 main.go:143] libmachine: Using SSH client type: native
	I1213 08:57:29.666532    1308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:57:29.666532    1308 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 08:57:29.836610    1308 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1213 08:57:29.836610    1308 ubuntu.go:71] root file system type: overlay
	I1213 08:57:29.836610    1308 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 08:57:29.840760    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:29.894590    1308 main.go:143] libmachine: Using SSH client type: native
	I1213 08:57:29.895592    1308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:57:29.895592    1308 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 08:57:30.101134    1308 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 08:57:30.105760    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:30.161736    1308 main.go:143] libmachine: Using SSH client type: native
	I1213 08:57:30.162318    1308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:57:30.162318    1308 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 08:57:30.345094    1308 main.go:143] libmachine: SSH cmd err, output: <nil>: 
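
	That one-liner is an idempotent unit update: the rendered docker.service.new is only installed, and dockerd only reloaded, re-enabled, and restarted, when diff exits non-zero because the unit changed (or was missing); an unchanged config leaves the running daemon untouched. Expanded for readability:

		if ! sudo diff -u /lib/systemd/system/docker.service \
		               /lib/systemd/system/docker.service.new; then
		  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
		  sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
		fi
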
	I1213 08:57:30.345094    1308 machine.go:97] duration metric: took 1.947253s to provisionDockerMachine
	I1213 08:57:30.345094    1308 start.go:293] postStartSetup for "functional-482100" (driver="docker")
	I1213 08:57:30.345094    1308 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 08:57:30.349348    1308 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 08:57:30.352292    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:30.407399    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:30.537367    1308 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 08:57:30.545885    1308 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 08:57:30.545957    1308 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 08:57:30.545957    1308 command_runner.go:130] > VERSION_ID="12"
	I1213 08:57:30.545957    1308 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 08:57:30.545957    1308 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 08:57:30.545957    1308 command_runner.go:130] > ID=debian
	I1213 08:57:30.545957    1308 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 08:57:30.545957    1308 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 08:57:30.545957    1308 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 08:57:30.546095    1308 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 08:57:30.546117    1308 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 08:57:30.546141    1308 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1213 08:57:30.546161    1308 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1213 08:57:30.546880    1308 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> 29682.pem in /etc/ssl/certs
	I1213 08:57:30.546880    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> /etc/ssl/certs/29682.pem
	I1213 08:57:30.547539    1308 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\2968\hosts -> hosts in /etc/test/nested/copy/2968
	I1213 08:57:30.547539    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\2968\hosts -> /etc/test/nested/copy/2968/hosts
	I1213 08:57:30.551732    1308 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2968
	I1213 08:57:30.565806    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /etc/ssl/certs/29682.pem (1708 bytes)
	I1213 08:57:30.596092    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\2968\hosts --> /etc/test/nested/copy/2968/hosts (40 bytes)
	I1213 08:57:30.624821    1308 start.go:296] duration metric: took 279.7253ms for postStartSetup
	I1213 08:57:30.629883    1308 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 08:57:30.633087    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:30.686590    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:30.807695    1308 command_runner.go:130] > 1%
	I1213 08:57:30.812335    1308 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 08:57:30.820851    1308 command_runner.go:130] > 950G
	I1213 08:57:30.820851    1308 fix.go:56] duration metric: took 2.4893282s for fixHost
	I1213 08:57:30.820851    1308 start.go:83] releasing machines lock for "functional-482100", held for 2.4893282s
	I1213 08:57:30.824237    1308 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-482100
	I1213 08:57:30.876765    1308 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1213 08:57:30.881324    1308 ssh_runner.go:195] Run: cat /version.json
	I1213 08:57:30.881371    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:30.884518    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:30.935914    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:30.935914    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:31.066730    1308 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1213 08:57:31.066730    1308 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
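
	This failed probe is what produces the registry warning a few lines below: the command was issued as curl.exe, which exists on the Windows host but not inside the Linux container, so it exits 127 (command not found) before any network test can happen. Re-running the probe by hand, assuming plain curl is present in the kicbase image:

		docker exec functional-482100 curl.exe -sS -m 2 https://registry.k8s.io/  # exit 127, command not found
		docker exec functional-482100 curl -sS -m 2 https://registry.k8s.io/      # the connectivity check that was intended
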
	I1213 08:57:31.066730    1308 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 08:57:31.071708    1308 ssh_runner.go:195] Run: systemctl --version
	I1213 08:57:31.084553    1308 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 08:57:31.084640    1308 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 08:57:31.090087    1308 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 08:57:31.099561    1308 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 08:57:31.100565    1308 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 08:57:31.105214    1308 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 08:57:31.124077    1308 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
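
	The find invocation two lines up arrives shell-escaped through ssh_runner; unescaped, it renames any bridge or podman CNI config that is not already disabled, so stale configs cannot conflict with the CNI minikube is about to select. A readable equivalent:

		sudo find /etc/cni/net.d -maxdepth 1 -type f \
		  \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
		  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
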
	I1213 08:57:31.124077    1308 start.go:496] detecting cgroup driver to use...
	I1213 08:57:31.124077    1308 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 08:57:31.124648    1308 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 08:57:31.147852    1308 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1213 08:57:31.152021    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 08:57:31.174172    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1213 08:57:31.176576    1308 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1213 08:57:31.176576    1308 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
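
	The linked doc amounts to exporting the proxy before start so minikube can pass it through to the engine inside the container. A POSIX-shell sketch using the proxy reported in the docker system info dump at the top of this trace (the http:// scheme is an assumption, and on this Windows CI host the real invocation would be the PowerShell equivalent):

		export HTTP_PROXY=http://http.docker.internal:3128
		export HTTPS_PROXY=http://http.docker.internal:3128
		export NO_PROXY=localhost,127.0.0.1,192.168.49.2   # keep node and loopback traffic off the proxy
		minikube start -p functional-482100 --driver=docker
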
	I1213 08:57:31.189695    1308 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 08:57:31.194128    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 08:57:31.213650    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:57:31.232544    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 08:57:31.252203    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:57:31.274175    1308 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 08:57:31.296706    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 08:57:31.315777    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 08:57:31.334664    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 08:57:31.355488    1308 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 08:57:31.369376    1308 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 08:57:31.373398    1308 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 08:57:31.391830    1308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:57:31.608372    1308 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 08:57:31.906123    1308 start.go:496] detecting cgroup driver to use...
	I1213 08:57:31.906123    1308 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 08:57:31.911089    1308 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 08:57:31.932611    1308 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1213 08:57:31.933145    1308 command_runner.go:130] > [Unit]
	I1213 08:57:31.933145    1308 command_runner.go:130] > Description=Docker Application Container Engine
	I1213 08:57:31.933145    1308 command_runner.go:130] > Documentation=https://docs.docker.com
	I1213 08:57:31.933145    1308 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1213 08:57:31.933145    1308 command_runner.go:130] > Wants=network-online.target containerd.service
	I1213 08:57:31.933145    1308 command_runner.go:130] > Requires=docker.socket
	I1213 08:57:31.933145    1308 command_runner.go:130] > StartLimitBurst=3
	I1213 08:57:31.933239    1308 command_runner.go:130] > StartLimitIntervalSec=60
	I1213 08:57:31.933239    1308 command_runner.go:130] > [Service]
	I1213 08:57:31.933239    1308 command_runner.go:130] > Type=notify
	I1213 08:57:31.933239    1308 command_runner.go:130] > Restart=always
	I1213 08:57:31.933239    1308 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1213 08:57:31.933239    1308 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1213 08:57:31.933303    1308 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1213 08:57:31.933336    1308 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1213 08:57:31.933336    1308 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1213 08:57:31.933336    1308 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1213 08:57:31.933336    1308 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1213 08:57:31.933336    1308 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1213 08:57:31.933336    1308 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1213 08:57:31.933415    1308 command_runner.go:130] > ExecStart=
	I1213 08:57:31.933415    1308 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1213 08:57:31.933415    1308 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1213 08:57:31.933415    1308 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1213 08:57:31.933498    1308 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1213 08:57:31.933498    1308 command_runner.go:130] > LimitNOFILE=infinity
	I1213 08:57:31.933498    1308 command_runner.go:130] > LimitNPROC=infinity
	I1213 08:57:31.933498    1308 command_runner.go:130] > LimitCORE=infinity
	I1213 08:57:31.933498    1308 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1213 08:57:31.933498    1308 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1213 08:57:31.933498    1308 command_runner.go:130] > TasksMax=infinity
	I1213 08:57:31.933498    1308 command_runner.go:130] > TimeoutStartSec=0
	I1213 08:57:31.933572    1308 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1213 08:57:31.933591    1308 command_runner.go:130] > Delegate=yes
	I1213 08:57:31.933591    1308 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1213 08:57:31.933591    1308 command_runner.go:130] > KillMode=process
	I1213 08:57:31.933591    1308 command_runner.go:130] > OOMScoreAdjust=-500
	I1213 08:57:31.933591    1308 command_runner.go:130] > [Install]
	I1213 08:57:31.933591    1308 command_runner.go:130] > WantedBy=multi-user.target
	I1213 08:57:31.938295    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 08:57:31.960377    1308 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 08:57:32.049121    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 08:57:32.071680    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 08:57:32.093496    1308 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 08:57:32.115103    1308 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
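
	Note that /etc/crictl.yaml has now been written twice during this start: first pointed at containerd while that runtime was being configured, then switched to cri-dockerd once docker was settled on as the runtime. What ends up on disk:

		cat <<'EOF' | sudo tee /etc/crictl.yaml
		runtime-endpoint: unix:///var/run/cri-dockerd.sock
		EOF
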
	I1213 08:57:32.119951    1308 ssh_runner.go:195] Run: which cri-dockerd
	I1213 08:57:32.126371    1308 command_runner.go:130] > /usr/bin/cri-dockerd
	I1213 08:57:32.130902    1308 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 08:57:32.144169    1308 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1213 08:57:32.170348    1308 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 08:57:32.320163    1308 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 08:57:32.454851    1308 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 08:57:32.454851    1308 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
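
	The 130-byte daemon.json itself is never echoed into the log; a minimal sketch of what "configuring docker to use cgroupfs" requires is pinning the exec option below (the actual file contents here are an assumption):

		cat <<'EOF' | sudo tee /etc/docker/daemon.json
		{
		  "exec-opts": ["native.cgroupdriver=cgroupfs"]
		}
		EOF
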
	I1213 08:57:32.483674    1308 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1213 08:57:32.505831    1308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:57:32.661991    1308 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 08:57:33.665330    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 08:57:33.689450    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 08:57:33.711087    1308 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1213 08:57:33.739462    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 08:57:33.760714    1308 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 08:57:33.900242    1308 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 08:57:34.052335    1308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:57:34.188283    1308 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 08:57:34.213402    1308 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1213 08:57:34.237672    1308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:57:34.381154    1308 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 08:57:34.499581    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 08:57:34.518141    1308 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 08:57:34.522686    1308 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 08:57:34.529494    1308 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1213 08:57:34.529494    1308 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 08:57:34.529494    1308 command_runner.go:130] > Device: 0,112	Inode: 1755        Links: 1
	I1213 08:57:34.529494    1308 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1213 08:57:34.529494    1308 command_runner.go:130] > Access: 2025-12-13 08:57:34.386291479 +0000
	I1213 08:57:34.529494    1308 command_runner.go:130] > Modify: 2025-12-13 08:57:34.386291479 +0000
	I1213 08:57:34.529494    1308 command_runner.go:130] > Change: 2025-12-13 08:57:34.386291479 +0000
	I1213 08:57:34.529494    1308 command_runner.go:130] >  Birth: -
	I1213 08:57:34.529494    1308 start.go:564] Will wait 60s for crictl version
	I1213 08:57:34.534224    1308 ssh_runner.go:195] Run: which crictl
	I1213 08:57:34.541202    1308 command_runner.go:130] > /usr/local/bin/crictl
	I1213 08:57:34.545269    1308 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 08:57:34.587655    1308 command_runner.go:130] > Version:  0.1.0
	I1213 08:57:34.587655    1308 command_runner.go:130] > RuntimeName:  docker
	I1213 08:57:34.587655    1308 command_runner.go:130] > RuntimeVersion:  29.1.2
	I1213 08:57:34.587655    1308 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 08:57:34.587655    1308 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1213 08:57:34.590292    1308 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 08:57:34.627699    1308 command_runner.go:130] > 29.1.2
	I1213 08:57:34.631112    1308 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 08:57:34.669555    1308 command_runner.go:130] > 29.1.2
	I1213 08:57:34.677969    1308 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1213 08:57:34.681392    1308 cli_runner.go:164] Run: docker exec -t functional-482100 dig +short host.docker.internal
	I1213 08:57:34.898094    1308 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1213 08:57:34.902419    1308 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1213 08:57:34.910595    1308 command_runner.go:130] > 192.168.65.254	host.minikube.internal
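
	Host discovery here runs from inside the container: dig the Docker-provided alias host.docker.internal, then confirm /etc/hosts maps host.minikube.internal to the same address so workloads in the guest reach the host consistently. By hand:

		HOST_IP=$(docker exec functional-482100 dig +short host.docker.internal)  # 192.168.65.254 in this run
		docker exec functional-482100 grep host.minikube.internal /etc/hosts      # expect: 192.168.65.254  host.minikube.internal
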
	I1213 08:57:34.914565    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:34.972832    1308 kubeadm.go:884] updating cluster {Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 08:57:34.972832    1308 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 08:57:34.977045    1308 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1213 08:57:35.008739    1308 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 08:57:35.008739    1308 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 08:57:35.010249    1308 docker.go:621] Images already preloaded, skipping extraction
	I1213 08:57:35.013678    1308 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 08:57:35.043903    1308 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 08:57:35.043957    1308 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 08:57:35.043957    1308 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 08:57:35.043957    1308 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 08:57:35.043957    1308 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1213 08:57:35.043957    1308 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1213 08:57:35.043957    1308 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1213 08:57:35.044022    1308 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 08:57:35.044104    1308 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 08:57:35.044104    1308 cache_images.go:86] Images are preloaded, skipping loading
	I1213 08:57:35.044160    1308 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1213 08:57:35.044312    1308 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-482100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 08:57:35.047625    1308 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1213 08:57:35.491294    1308 command_runner.go:130] > cgroupfs
	I1213 08:57:35.491294    1308 cni.go:84] Creating CNI manager for ""
	I1213 08:57:35.491294    1308 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 08:57:35.491294    1308 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 08:57:35.491294    1308 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-482100 NodeName:functional-482100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 08:57:35.491294    1308 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-482100"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 08:57:35.495479    1308 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 08:57:35.511680    1308 command_runner.go:130] > kubeadm
	I1213 08:57:35.511680    1308 command_runner.go:130] > kubectl
	I1213 08:57:35.511680    1308 command_runner.go:130] > kubelet
	I1213 08:57:35.511680    1308 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 08:57:35.515943    1308 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 08:57:35.527808    1308 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1213 08:57:35.545969    1308 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 08:57:35.565749    1308 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1213 08:57:35.590269    1308 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 08:57:35.598806    1308 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 08:57:35.603098    1308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:57:35.752426    1308 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 08:57:35.771354    1308 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100 for IP: 192.168.49.2
	I1213 08:57:35.771354    1308 certs.go:195] generating shared ca certs ...
	I1213 08:57:35.771354    1308 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:57:35.771354    1308 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1213 08:57:35.772397    1308 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1213 08:57:35.772549    1308 certs.go:257] generating profile certs ...
	I1213 08:57:35.772794    1308 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\client.key
	I1213 08:57:35.772794    1308 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.key.13621831
	I1213 08:57:35.773396    1308 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.key
	I1213 08:57:35.773447    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 08:57:35.773539    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1213 08:57:35.773616    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 08:57:35.773761    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 08:57:35.773831    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 08:57:35.773939    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 08:57:35.773999    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 08:57:35.774105    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 08:57:35.774559    1308 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem (1338 bytes)
	W1213 08:57:35.774827    1308 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968_empty.pem, impossibly tiny 0 bytes
	I1213 08:57:35.774870    1308 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1213 08:57:35.775069    1308 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1213 08:57:35.775069    1308 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1213 08:57:35.775069    1308 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1213 08:57:35.775696    1308 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem (1708 bytes)
	I1213 08:57:35.775842    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:57:35.775842    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem -> /usr/share/ca-certificates/2968.pem
	I1213 08:57:35.775842    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> /usr/share/ca-certificates/29682.pem
	I1213 08:57:35.775842    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 08:57:35.807179    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 08:57:35.833688    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 08:57:35.863566    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 08:57:35.894920    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 08:57:35.921314    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 08:57:35.946004    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 08:57:35.973030    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 08:57:36.001405    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 08:57:36.027495    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem --> /usr/share/ca-certificates/2968.pem (1338 bytes)
	I1213 08:57:36.053673    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /usr/share/ca-certificates/29682.pem (1708 bytes)
	I1213 08:57:36.083163    1308 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
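
The NewFileAsset/scp pairs above copy each host-side certificate to a fixed destination inside the node. A minimal sketch of that source-to-target loop, assuming plain local file I/O in place of minikube's SSH-based runner (the fileAsset type and the example pair are illustrative, not minikube's actual vm_assets API):

    package main

    import (
    	"fmt"
    	"os"
    )

    // fileAsset pairs a host-side source path with its destination inside the
    // node, like the NewFileAsset lines above. The real copy goes over SSH
    // (ssh_runner.go's scp), not local file I/O.
    type fileAsset struct {
    	src, dst string
    }

    func copyAssets(assets []fileAsset) error {
    	for _, a := range assets {
    		data, err := os.ReadFile(a.src)
    		if err != nil {
    			return fmt.Errorf("read %s: %w", a.src, err)
    		}
    		// 0644 matches the permissions the later ls -la checks report.
    		if err := os.WriteFile(a.dst, data, 0o644); err != nil {
    			return fmt.Errorf("write %s: %w", a.dst, err)
    		}
    	}
    	return nil
    }

    func main() {
    	err := copyAssets([]fileAsset{
    		{src: "ca.crt", dst: "/var/lib/minikube/certs/ca.crt"}, // hypothetical pair
    	})
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
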
	I1213 08:57:36.106205    1308 ssh_runner.go:195] Run: openssl version
	I1213 08:57:36.124518    1308 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 08:57:36.128653    1308 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2968.pem
	I1213 08:57:36.148109    1308 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2968.pem /etc/ssl/certs/2968.pem
	I1213 08:57:36.170644    1308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2968.pem
	I1213 08:57:36.179909    1308 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 08:48 /usr/share/ca-certificates/2968.pem
	I1213 08:57:36.179909    1308 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:48 /usr/share/ca-certificates/2968.pem
	I1213 08:57:36.184506    1308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2968.pem
	I1213 08:57:36.230303    1308 command_runner.go:130] > 51391683
	I1213 08:57:36.235418    1308 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 08:57:36.252420    1308 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29682.pem
	I1213 08:57:36.271009    1308 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29682.pem /etc/ssl/certs/29682.pem
	I1213 08:57:36.291738    1308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29682.pem
	I1213 08:57:36.301002    1308 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 08:48 /usr/share/ca-certificates/29682.pem
	I1213 08:57:36.301002    1308 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:48 /usr/share/ca-certificates/29682.pem
	I1213 08:57:36.306035    1308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29682.pem
	I1213 08:57:36.348842    1308 command_runner.go:130] > 3ec20f2e
	I1213 08:57:36.353574    1308 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 08:57:36.371994    1308 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:57:36.390417    1308 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 08:57:36.409132    1308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:57:36.417987    1308 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 08:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:57:36.418020    1308 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:57:36.422336    1308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:57:36.464222    1308 command_runner.go:130] > b5213941
	I1213 08:57:36.469763    1308 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
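
The openssl x509 -hash / ln -fs sequence above is the standard OpenSSL trust-directory registration: the hash of the certificate's subject name becomes the symlink name <hash>.0 under /etc/ssl/certs, which is exactly the 51391683.0, 3ec20f2e.0, and b5213941.0 links the log then tests. An equivalent sketch in Go that shells out to openssl the same way the log does (paths are this run's; error handling is minimal):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkCACert computes the OpenSSL subject hash of pemPath and symlinks it
    // into certsDir as <hash>.0, which is how OpenSSL locates trusted CAs.
    func linkCACert(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hash %s: %w", pemPath, err)
    	}
    	link := certsDir + "/" + strings.TrimSpace(string(out)) + ".0"
    	_ = os.Remove(link) // mirror ln -fs: replace any stale link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	// e.g. 2968.pem hashes to 51391683, giving /etc/ssl/certs/51391683.0
    	if err := linkCACert("/usr/share/ca-certificates/2968.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
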
	I1213 08:57:36.486907    1308 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 08:57:36.493430    1308 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 08:57:36.493430    1308 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 08:57:36.493430    1308 command_runner.go:130] > Device: 8,48	Inode: 15294       Links: 1
	I1213 08:57:36.493430    1308 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 08:57:36.493430    1308 command_runner.go:130] > Access: 2025-12-13 08:53:22.558756963 +0000
	I1213 08:57:36.493430    1308 command_runner.go:130] > Modify: 2025-12-13 08:49:20.154446480 +0000
	I1213 08:57:36.493430    1308 command_runner.go:130] > Change: 2025-12-13 08:49:20.154446480 +0000
	I1213 08:57:36.493430    1308 command_runner.go:130] >  Birth: 2025-12-13 08:49:20.154446480 +0000
	I1213 08:57:36.498322    1308 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 08:57:36.542775    1308 command_runner.go:130] > Certificate will not expire
	I1213 08:57:36.547618    1308 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 08:57:36.590488    1308 command_runner.go:130] > Certificate will not expire
	I1213 08:57:36.594826    1308 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 08:57:36.640226    1308 command_runner.go:130] > Certificate will not expire
	I1213 08:57:36.644848    1308 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 08:57:36.698932    1308 command_runner.go:130] > Certificate will not expire
	I1213 08:57:36.703709    1308 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 08:57:36.746225    1308 command_runner.go:130] > Certificate will not expire
	I1213 08:57:36.751252    1308 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 08:57:36.796246    1308 command_runner.go:130] > Certificate will not expire
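
-checkend 86400 asks openssl whether the certificate expires within the next 24 hours; exit status 0 produces the "Certificate will not expire" lines above. The same verdict can be computed natively; a sketch assuming a PEM-encoded certificate on disk:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in pemPath expires
    // within d, mirroring `openssl x509 -checkend`.
    func expiresWithin(pemPath string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(pemPath)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", pemPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err == nil && !soon {
    		fmt.Println("Certificate will not expire") // same verdict the log prints
    	}
    }
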
	I1213 08:57:36.796605    1308 kubeadm.go:401] StartCluster: {Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:57:36.800619    1308 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 08:57:36.835511    1308 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 08:57:36.848084    1308 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 08:57:36.848084    1308 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 08:57:36.848084    1308 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 08:57:36.848084    1308 kubeadm.go:417] found existing configuration files, will attempt cluster restart
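
The `sudo ls` probe above drives the restart-vs-fresh-init decision: only if the kubelet flags file, kubelet config, and etcd data directory all exist does minikube attempt a cluster restart. A sketch of the equivalent check with os.Stat, using the path list taken from the log:

    package main

    import (
    	"fmt"
    	"os"
    )

    // restartable reports whether an existing kubeadm setup is present,
    // mirroring the `sudo ls` probe above.
    func restartable() bool {
    	for _, p := range []string{
    		"/var/lib/kubelet/kubeadm-flags.env",
    		"/var/lib/kubelet/config.yaml",
    		"/var/lib/minikube/etcd",
    	} {
    		if _, err := os.Stat(p); err != nil {
    			return false
    		}
    	}
    	return true
    }

    func main() {
    	fmt.Println("attempt cluster restart:", restartable())
    }
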
	I1213 08:57:36.848084    1308 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 08:57:36.853050    1308 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 08:57:36.866011    1308 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:57:36.869675    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:36.923417    1308 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-482100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:57:36.923684    1308 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "functional-482100" cluster setting kubeconfig missing "functional-482100" context setting]
	I1213 08:57:36.923684    1308 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:57:36.940090    1308 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:57:36.940688    1308 kapi.go:59] client config for functional-482100: &rest.Config{Host:"https://127.0.0.1:63845", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff744969080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 08:57:36.941864    1308 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 08:57:36.941864    1308 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 08:57:36.941864    1308 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 08:57:36.941864    1308 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 08:57:36.941864    1308 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 08:57:36.941864    1308 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
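
kubeconfig.go:62 repairs the kubeconfig by inserting the cluster and context entries the verify step found missing, after which kapi.go:59 builds the rest.Config dumped above. A minimal client-go sketch of both steps, assuming this run's profile name, endpoint, and kubeconfig path (error handling reduced to panics):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
    	const path = `C:\Users\jenkins.minikube4\minikube-integration\kubeconfig`
    	const name = "functional-482100"

    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// Repair: add the missing cluster and context settings, then rewrite the file.
    	if _, ok := cfg.Clusters[name]; !ok {
    		cfg.Clusters[name] = &clientcmdapi.Cluster{Server: "https://127.0.0.1:63845"}
    	}
    	if _, ok := cfg.Contexts[name]; !ok {
    		cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
    	}
    	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
    		panic(err)
    	}

    	// Build the client config and clientset, the equivalent of the kapi.go dump.
    	restCfg, err := clientcmd.BuildConfigFromFlags("", path)
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(restCfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("client ready:", clientset != nil)
    }
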
	I1213 08:57:36.946352    1308 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 08:57:36.960987    1308 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1213 08:57:36.961998    1308 kubeadm.go:602] duration metric: took 113.913ms to restartPrimaryControlPlane
	I1213 08:57:36.961998    1308 kubeadm.go:403] duration metric: took 165.4668ms to StartCluster
	I1213 08:57:36.961998    1308 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:57:36.961998    1308 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:57:36.963076    1308 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:57:36.963883    1308 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 08:57:36.963883    1308 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 08:57:36.963883    1308 addons.go:70] Setting default-storageclass=true in profile "functional-482100"
	I1213 08:57:36.963883    1308 addons.go:70] Setting storage-provisioner=true in profile "functional-482100"
	I1213 08:57:36.963883    1308 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 08:57:36.963883    1308 addons.go:239] Setting addon storage-provisioner=true in "functional-482100"
	I1213 08:57:36.963883    1308 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-482100"
	I1213 08:57:36.964406    1308 host.go:66] Checking if "functional-482100" exists ...
	I1213 08:57:36.966968    1308 out.go:179] * Verifying Kubernetes components...
	I1213 08:57:36.972864    1308 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
	I1213 08:57:36.972864    1308 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
	I1213 08:57:36.974067    1308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:57:37.028122    1308 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 08:57:37.032121    1308 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:37.032121    1308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 08:57:37.035128    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:37.050133    1308 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:57:37.050133    1308 kapi.go:59] client config for functional-482100: &rest.Config{Host:"https://127.0.0.1:63845", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff744969080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 08:57:37.051141    1308 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 08:57:37.051141    1308 addons.go:239] Setting addon default-storageclass=true in "functional-482100"
	I1213 08:57:37.051141    1308 host.go:66] Checking if "functional-482100" exists ...
	I1213 08:57:37.059130    1308 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
	I1213 08:57:37.090124    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:37.112122    1308 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:37.112122    1308 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 08:57:37.115122    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:37.124126    1308 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 08:57:37.163123    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:37.218965    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:37.244846    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:37.292847    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:37.297857    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:37.298846    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:37.298846    1308 retry.go:31] will retry after 278.997974ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:37.298846    1308 node_ready.go:35] waiting up to 6m0s for node "functional-482100" to be "Ready" ...
	I1213 08:57:37.298846    1308 type.go:168] "Request Body" body=""
	I1213 08:57:37.298846    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:37.300855    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
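
node_ready.go:35 now polls GET /api/v1/nodes/functional-482100 once per second until the node reports Ready; the empty status="" responses above (and the later EOF) are the apiserver still refusing connections on 8441. A sketch of the same readiness loop with client-go, assuming a clientset like the one built in the earlier sketch (package-level snippet, no main):

    package sketch

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node's Ready condition until timeout, treating
    // request errors as "not ready yet" so a restarting apiserver is retried.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(context.Background(), time.Second, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // e.g. connection refused while the apiserver restarts
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

The 6m0s node wait announced at start.go:236 would be the timeout argument here.
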
	I1213 08:57:37.389624    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:37.394960    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:37.394960    1308 retry.go:31] will retry after 212.815514ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
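
Every failed apply above is handed to retry.go:31, which reschedules it after a randomized, roughly growing delay (279ms, 422ms, 1.098s, ... up to 6.6s later in the log). A minimal sketch of that jittered-backoff retry pattern; the constants are illustrative, not minikube's:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff retries fn until it succeeds or attempts run out,
    // sleeping a jittered, roughly doubling delay between tries.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		delay := base<<i + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: %v\n", delay, err)
    		time.Sleep(delay)
    	}
    	return err
    }

    func main() {
    	_ = retryWithBackoff(3, 200*time.Millisecond, func() error {
    		return fmt.Errorf("connection refused") // stands in for the kubectl apply failure
    	})
    }
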
	I1213 08:57:37.583432    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:37.612508    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:37.662694    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:37.668089    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:37.668089    1308 retry.go:31] will retry after 421.785382ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:37.691227    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:37.696684    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:37.696684    1308 retry.go:31] will retry after 387.963958ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.090409    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:38.094708    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:38.167644    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:38.172931    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.172931    1308 retry.go:31] will retry after 654.783355ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.174195    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:38.178117    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.178179    1308 retry.go:31] will retry after 288.314182ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.301152    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:38.301683    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:38.304388    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:38.472962    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:38.544996    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:38.548547    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.548623    1308 retry.go:31] will retry after 1.098701937s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.833272    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:38.912142    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:38.912142    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.912142    1308 retry.go:31] will retry after 808.399476ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:39.305249    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:39.305249    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:39.308473    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:39.652260    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:39.721531    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:39.726229    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 08:57:39.726899    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:39.726899    1308 retry.go:31] will retry after 1.580407023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:39.799856    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:39.802238    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:39.802238    1308 retry.go:31] will retry after 1.163449845s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:40.308791    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:40.308791    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:40.310792    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:40.971107    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:41.051235    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:41.056481    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:41.056595    1308 retry.go:31] will retry after 2.292483012s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:41.312219    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:41.312219    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:41.313763    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:41.315446    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:41.385280    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:41.389328    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:41.389328    1308 retry.go:31] will retry after 2.10655749s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:42.316064    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:42.316469    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:42.319430    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:43.319659    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:43.319659    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:43.322154    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:43.354119    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:43.424936    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:43.428566    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:43.428566    1308 retry.go:31] will retry after 2.451441131s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:43.500768    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:43.577861    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:43.581800    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:43.581870    1308 retry.go:31] will retry after 1.842575818s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:44.322393    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:44.322393    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:44.326064    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:45.326352    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:45.326352    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:45.329823    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:45.430441    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:45.504084    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:45.509721    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:45.509813    1308 retry.go:31] will retry after 3.320490506s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:45.885819    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:45.962560    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:45.966882    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:45.966882    1308 retry.go:31] will retry after 5.131341184s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:46.330362    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:46.330362    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:46.333170    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:47.333778    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:47.333778    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:47.337260    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1213 08:57:47.337260    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 08:57:47.337260    1308 type.go:168] "Request Body" body=""
	I1213 08:57:47.337260    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:47.340404    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:48.340937    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:48.340937    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:48.344443    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:48.835623    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:48.914169    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:48.918486    1308 addons.go:477] apply failed, will retry: Process exited with status 1; empty stdout, stderr repeats the validation error above.
	I1213 08:57:48.918486    1308 retry.go:31] will retry after 6.605490232s
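The addons applier wraps each kubectl invocation in a retry helper (the retry.go:31 lines): on failure it sleeps a randomized, roughly growing delay before re-running the same command, which is why the waits in this log step from ~4s toward ~44s rather than repeating a fixed interval. The sketch below shows that retry-with-jittered-backoff pattern; it is a hypothetical stand-in (retryWithBackoff is an invented name, and minikube's real helper computes its jitter differently), not the actual minikube code.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retryWithBackoff runs cmd until it succeeds or attempts run out,
// sleeping base*2^n plus random jitter between tries, like the
// escalating "will retry after ..." delays in the log.
func retryWithBackoff(attempts int, base time.Duration, cmd func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = cmd(); err == nil {
			return nil
		}
		if i == attempts-1 {
			break // no point sleeping after the final failure
		}
		delay := base << uint(i)                         // exponential growth
		delay += time.Duration(rand.Int63n(int64(delay / 2))) // up to +50% jitter
		fmt.Printf("attempt %d failed (%v); will retry after %s\n", i+1, err, delay)
		time.Sleep(delay)
	}
	return fmt.Errorf("all %d attempts failed: %w", attempts, err)
}

func main() {
	apply := func() error {
		// Placeholder for the kubectl apply the log keeps retrying.
		return exec.Command("kubectl", "apply", "--force",
			"-f", "/etc/kubernetes/addons/storage-provisioner.yaml").Run()
	}
	if err := retryWithBackoff(5, 5*time.Second, apply); err != nil {
		fmt.Println(err)
	}
}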
	[08:57:49–08:57:50] node GET Retry-After cycle, attempts 2–3 (responses 2–3 ms); entries elided.
	I1213 08:57:51.103982    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:51.174396    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:51.177073    1308 addons.go:477] apply failed, will retry: Process exited with status 1; empty stdout, stderr repeats the validation error above.
	I1213 08:57:51.177136    1308 retry.go:31] will retry after 4.217545245s
	[08:57:51–08:57:55] node GET Retry-After cycle, attempts 4–8 (responses 2–3 ms); entries elided.
	I1213 08:57:55.400351    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:55.476385    1308 command_runner.go:130] ! error: (same storageclass.yaml validation failure: openapi fetch to https://localhost:8441 connection refused)
	W1213 08:57:55.480063    1308 addons.go:477] apply failed, will retry: Process exited with status 1
	I1213 08:57:55.480122    1308 retry.go:31] will retry after 11.422205159s
	I1213 08:57:55.528824    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:55.599872    1308 command_runner.go:130] ! error: (same storage-provisioner.yaml validation failure)
	W1213 08:57:55.604580    1308 addons.go:477] apply failed, will retry: Process exited with status 1
	I1213 08:57:55.604626    1308 retry.go:31] will retry after 13.338795854s
	[08:57:56–08:57:57] node GET Retry-After cycle, attempts 9–10; entries elided.
	W1213 08:57:57.379427    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[08:57:57] fresh GET with empty request body (response 1 ms).
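What node_ready.go:55 is waiting for is the Ready condition on the node object; every condensed cycle above is one failed attempt to fetch that object before the apiserver is reachable. For reference, a sketch of the equivalent check written against client-go (hypothetical code: nodeReady is an invented helper and minikube's own implementation wires this up differently, but the condition being tested is the one named in the warning):

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches a node and reports whether its Ready condition is
// True, the check the log's node_ready.go keeps retrying.
func nodeReady(clientset *kubernetes.Clientset, name string) (bool, error) {
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. the EOFs above while the apiserver is down
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == v1.NodeReady {
			return cond.Status == v1.ConditionTrue, nil
		}
	}
	return false, nil // no Ready condition reported yet
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ready, err := nodeReady(clientset, "functional-482100")
	fmt.Println("ready:", ready, "err:", err)
}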
	[08:57:58–08:58:06] node GET Retry-After cycle, attempts 1–9 (responses 2–4 ms); entries elided.
	I1213 08:58:06.907694    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:58:06.989453    1308 command_runner.go:130] ! error: (same storageclass.yaml validation failure)
	W1213 08:58:06.993505    1308 addons.go:477] apply failed, will retry: Process exited with status 1
	I1213 08:58:06.993505    1308 retry.go:31] will retry after 9.12046724s
	[08:58:07] node GET Retry-After attempt 10 (response 9 ms).
	W1213 08:58:07.423766    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[08:58:07–08:58:08] fresh GET, then Retry-After attempt 1; entries elided.
	I1213 08:58:08.949269    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:58:09.021443    1308 command_runner.go:130] ! error: (same storage-provisioner.yaml validation failure)
	W1213 08:58:09.021574    1308 addons.go:477] apply failed, will retry: Process exited with status 1
	I1213 08:58:09.021574    1308 retry.go:31] will retry after 18.212645226s
	[08:58:09–08:58:15] node GET Retry-After cycle, attempts 2–8 (responses 2–4 ms); entries elided.
	I1213 08:58:16.119722    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:58:16.199861    1308 command_runner.go:130] ! error: (same storageclass.yaml validation failure)
	W1213 08:58:16.203796    1308 addons.go:477] apply failed, will retry: Process exited with status 1
	I1213 08:58:16.203841    1308 retry.go:31] will retry after 32.127892546s
	[08:58:16–08:58:17] node GET Retry-After cycle, attempts 9–10; entries elided.
	W1213 08:58:17.464392    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[08:58:17–08:58:26] fresh GET, then Retry-After cycle attempts 1–9 (responses 2–4 ms); entries elided.
	I1213 08:58:27.239685    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:58:27.315134    1308 command_runner.go:130] ! error: (same storage-provisioner.yaml validation failure)
	W1213 08:58:27.318446    1308 addons.go:477] apply failed, will retry: Process exited with status 1
	I1213 08:58:27.318446    1308 retry.go:31] will retry after 22.292291086s
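Both failure families in this log reduce to one root cause: nothing is listening on the apiserver ports, so kubectl's client-side validation cannot fetch the OpenAPI schema from localhost:8441 (connection refused) and the node GETs against 127.0.0.1:63845 die with EOF. A quick probe of the apiserver's standard /healthz endpoint makes that state visible; this is a hypothetical diagnostic sketch (probeAPIServer is an invented name; the port comes from the log), not part of the test suite.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeAPIServer hits the apiserver's /healthz endpoint; "connection
// refused" here is exactly what the kubectl validation errors report.
func probeAPIServer(base string) {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   3 * time.Second,
	}
	resp, err := client.Get(base + "/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %s %s\n", resp.Status, body)
}

func main() {
	probeAPIServer("https://localhost:8441")
}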
	[08:58:27] node GET Retry-After attempt 10.
	W1213 08:58:27.505700    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[08:58:27–08:58:37] fresh GET, then Retry-After cycle attempts 1–10; entries elided.
	W1213 08:58:37.547165    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[08:58:37–08:58:47] fresh GET, then Retry-After cycle attempts 1–10; entries elided.
	W1213 08:58:47.589116    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[08:58:47] fresh GET with empty request body (response 2 ms).
	I1213 08:58:48.337063    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:58:48.419155    1308 command_runner.go:130] ! error: (same storageclass.yaml validation failure)
	W1213 08:58:48.419236    1308 addons.go:477] apply failed, will retry: Process exited with status 1
	I1213 08:58:48.419312    1308 retry.go:31] will retry after 42.344315794s
	[08:58:48–08:58:49] node GET Retry-After cycle, attempts 1–2; entries elided.
	I1213 08:58:49.616306    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:58:49.690748    1308 command_runner.go:130] ! error: (same storage-provisioner.yaml validation failure)
	W1213 08:58:49.696226    1308 addons.go:477] apply failed, will retry: Process exited with status 1
	I1213 08:58:49.696226    1308 retry.go:31] will retry after 43.889805704s
	I1213 08:58:50.598940    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:50.598940    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:50.602650    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	[... the client re-issues the same GET at 1s intervals (attempts 4-10, 08:58:51-08:58:57); the request and response lines are identical apart from the timestamp and a 2-4 ms latency ...]
	W1213 08:58:57.630131    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
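The with_retry.go lines above show client-go honoring a server-sent Retry-After header: it sleeps for the suggested 1s, re-issues the GET, and gives up after ten attempts, at which point the underlying EOF surfaces as the node_ready warning. Here is a stripped-down sketch of the same mechanism over plain net/http; it is an illustration, not client-go's actual implementation, and the target URL is the one from the log (a real call there would also need the cluster CA for TLS).

package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// getWithRetryAfter issues GETs until a response without a Retry-After
// header arrives, sleeping for the suggested interval between tries.
func getWithRetryAfter(url string, maxAttempts int) (*http.Response, error) {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		ra := resp.Header.Get("Retry-After")
		if ra == "" {
			return resp, nil // no throttling hint: hand the response back
		}
		resp.Body.Close()
		delay := time.Second // default, matching delay="1s" in the log
		if secs, convErr := strconv.Atoi(ra); convErr == nil {
			delay = time.Duration(secs) * time.Second
		}
		fmt.Printf("got Retry-After, attempt=%d, sleeping %v\n", attempt, delay)
		time.Sleep(delay)
	}
	return nil, fmt.Errorf("still throttled after %d attempts", maxAttempts)
}

func main() {
	resp, err := getWithRetryAfter("http://127.0.0.1:63845/api/v1/nodes/functional-482100", 10)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}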
	I1213 08:58:57.630323    1308 type.go:168] "Request Body" body=""
	[... the node-readiness poll then repeats the identical 10-attempt, 1s-interval retry cycle three more times; each cycle ends with the same warning:]
	W1213 08:59:07.670852    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	W1213 08:59:17.716674    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	W1213 08:59:27.755077    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[... attempts 1-3 of the next cycle (08:59:27-08:59:30) elided ...]
	I1213 08:59:30.770278    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:59:31.058703    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:59:31.062891    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:59:31.062891    1308 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	[duplicate console echo of the warning above omitted]
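Every failure in this excerpt reduces to one root cause: nothing usable is answering on the apiserver's port 8441, so kubectl cannot download the OpenAPI schema it needs for client-side validation. The suggested --validate=false would only skip that schema download; it would not make the apply succeed against a dead apiserver. The following is a small Go sketch of probing the apiserver's readiness endpoint directly; the port is taken from the log, and skipping TLS verification is an assumption made for a local diagnostic.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Local diagnostic only: the apiserver cert is self-signed.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	// /readyz is the apiserver readiness endpoint; a connection-refused
	// error here matches the dial errors seen throughout this log.
	resp, err := client.Get("https://localhost:8441/readyz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver status:", resp.Status)
}

With the apiserver in the state captured here, this probe would print the same connection-refused error that kubectl reports above.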
	[... node-readiness retry attempts 4-5 (08:59:31-08:59:32) elided ...]
	I1213 08:59:33.593527    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:59:33.670412    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:59:33.677065    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:59:33.677065    1308 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	[duplicate console echo of the warning above omitted]
	I1213 08:59:33.680151    1308 out.go:179] * Enabled addons: 
	I1213 08:59:33.683381    1308 addons.go:530] duration metric: took 1m56.7187029s for enable addons: enabled=[]
	[... node-readiness retry attempts 6-10 (08:59:33-08:59:37) elided ...]
	W1213 08:59:37.792181    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
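The node_ready.go warnings are minikube polling the node's Ready condition; the forwarded port apparently accepts the connection (note the 2-4 ms responses with empty status), but the stream breaks off, which client-go reports as EOF. Below is a minimal client-go sketch of the same readiness check; the kubeconfig path and node name are taken from the log, and this is an illustrative rewrite, not minikube's own code.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig location as used by the commands in this log.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "functional-482100", metav1.GetOptions{})
	if err != nil {
		// With the apiserver down, this is where the EOF above surfaces.
		fmt.Println("error getting node:", err)
		return
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Println("node Ready:", cond.Status == corev1.ConditionTrue)
		}
	}
}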
	I1213 08:59:37.792181    1308 type.go:168] "Request Body" body=""
	[... the identical 10-attempt, 1s-interval retry cycle repeats four more times between 08:59:38 and 09:00:17; the first three cycles end with:]
	W1213 08:59:47.837252    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	W1213 08:59:57.878519    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	W1213 09:00:07.920154    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[... the fourth cycle is elided down to its closing warning:]
	W1213 09:00:17.960625    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 09:00:17.960654    1308 type.go:168] "Request Body" body=""
	I1213 09:00:17.960654    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:17.962719    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
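
The loop above is client-go's Retry-After handling inside minikube's node-readiness wait: every GET on the node object comes back empty and ends in EOF, so node_ready.go warns and polls again. A minimal sketch of such a readiness poll, assuming client-go v0.27+ for wait.PollUntilContextTimeout (illustrative only; waitNodeReady is a made-up name, not minikube's actual node_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the apiserver once per second until the named node
// reports Ready=True, treating transient errors (like the EOFs above) as
// retryable rather than fatal.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// Log and keep polling; returning a non-nil error would abort the wait.
				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "functional-482100", 5*time.Minute); err != nil {
		panic(err)
	}
}

Returning (false, nil) from the poll condition is what keeps these errors in the "will retry" path instead of failing the wait outright, which is exactly the pattern visible in the log.
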
	[... the 1-second retry cycle shown above repeats unchanged for the next ~100 seconds: each block of ten "Got a Retry-After response" attempts (with_retry.go:234) against https://127.0.0.1:63845/api/v1/nodes/functional-482100 gets an empty response in 1-4ms, and each block ends with the same node_ready.go:55 warning, error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF, logged at 09:00:27, 09:00:38, 09:00:48, 09:00:58, 09:01:08, 09:01:18, 09:01:28, 09:01:38, and 09:01:48; the log resumes below at attempt 8 of the next block ...]
	I1213 09:01:56.364951    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:56.364951    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:56.368507    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:57.368791    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:57.368791    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:57.373234    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 09:01:58.373801    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:58.373801    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:58.376426    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1213 09:01:58.376426    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[log condensed: the cycle above repeats verbatim, once per second, for roughly another 90 seconds. Each GET of https://127.0.0.1:63845/api/v1/nodes/functional-482100 draws ten consecutive Retry-After responses (delay="1s", attempt=1 through attempt=10, each answered in 1-6 ms with an empty status), after which node_ready.go:55 emits the identical warning: error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF. The warning recurs at 09:02:08, 09:02:18, 09:02:28, 09:02:38, 09:02:48, 09:02:58, 09:03:08, and 09:03:18; the excerpt breaks off mid-cycle at 09:03:28 on attempt=10.]
	I1213 09:03:28.738714    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:03:28.741699    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1213 09:03:28.741800    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 09:03:28.741870    1308 type.go:168] "Request Body" body=""
	I1213 09:03:28.741870    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:03:28.744398    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:03:29.744620    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:03:29.744620    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:03:29.747986    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:03:30.748934    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:03:30.748934    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:03:30.751365    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:03:31.752294    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:03:31.752294    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:03:31.755860    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:03:32.756228    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:03:32.756228    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:03:32.758997    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:03:33.759818    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:03:33.759818    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:03:33.762321    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:03:34.763690    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:03:34.763690    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:03:34.770061    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1213 09:03:35.770469    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:03:35.770469    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:03:35.773118    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:03:36.773842    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:03:36.774178    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:03:36.778885    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1213 09:03:37.302575    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1213 09:03:37.302575    1308 node_ready.go:38] duration metric: took 6m0.0011646s for node "functional-482100" to be "Ready" ...
	I1213 09:03:37.305847    1308 out.go:203] 
	W1213 09:03:37.307851    1308 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 09:03:37.307851    1308 out.go:285] * 
	W1213 09:03:37.311623    1308 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 09:03:37.314310    1308 out.go:203] 

** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-windows-amd64.exe start -p functional-482100 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m10.7663446s for "functional-482100" cluster.
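The failure above is a timed-out poll: the node_ready check re-queries /api/v1/nodes/functional-482100 once per second (client-go's retry layer honoring the 1s Retry-After seen in the log, capped at 10 attempts per request before the warning is emitted and a fresh request begins) until the 6m0s StartHostTimeout expires. As a rough client-go sketch of that kind of wait, illustrative only and not minikube's actual implementation, with the kubeconfig path and node name assumed from this run:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a kubeconfig; the default home path is an assumption for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll once per second for up to 6 minutes, mirroring the 6m0s wait above.
	err = wait.PollUntilContextTimeout(context.Background(), time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, getErr := cs.CoreV1().Nodes().Get(ctx, "functional-482100", metav1.GetOptions{})
			if getErr != nil {
				return false, nil // treat EOF and similar errors as transient; keep retrying
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	// nil means the node went Ready; against this cluster it would instead
	// report the context deadline, as every GET died with EOF.
	fmt.Println("wait result:", err)
}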
I1213 09:03:38.081098    2968 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-482100
helpers_test.go:244: (dbg) docker inspect functional-482100:

-- stdout --
	[
	    {
	        "Id": "688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa",
	        "Created": "2025-12-13T08:49:07.27080474Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43282,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T08:49:07.556748749Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/hostname",
	        "HostsPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/hosts",
	        "LogPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa-json.log",
	        "Name": "/functional-482100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-482100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-482100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91-init/diff:/var/lib/docker/overlay2/429aa299c6fcdb1695d08ec7c893c57c033afffcd3ec41fc904bf3236db5abde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-482100",
	                "Source": "/var/lib/docker/volumes/functional-482100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-482100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-482100",
	                "name.minikube.sigs.k8s.io": "functional-482100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0846ee7b9ca8cb54809a7d685cd1bf9a4ebcad80c4fa7d3ad64c01e27d0c8bc4",
	            "SandboxKey": "/var/run/docker/netns/0846ee7b9ca8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63841"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63842"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63844"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63845"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-482100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "88ce21d6cbdebdf878313475255fe0fbc85957ab9cf1fa33630b61bbbfd2061c",
	                    "EndpointID": "88d9584a7fae8c35f7938fb422a7bed2f8ec5a3db15bd02c0d2459ed9f8f0e4d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-482100",
	                        "688ac19b4403"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
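Note the Ports block in the inspect output: 8441/tcp, the apiserver port selected with --apiserver-port=8441, is published on 127.0.0.1:63845, exactly the endpoint the failed wait was polling, so the Docker port mapping itself was intact. A minimal sketch of recovering that mapping with the same inspect template minikube applies to 22/tcp later in these logs (container name taken from this report):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask Docker for the host port bound to the container's 8441/tcp.
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`,
		"functional-482100").Output()
	if err != nil {
		panic(err)
	}
	// For the container inspected above this prints 63845, i.e. the
	// 127.0.0.1:63845 apiserver endpoint the retry loop was hitting.
	fmt.Println(strings.TrimSpace(string(out)))
}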
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-482100 -n functional-482100
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-482100 -n functional-482100: exit status 2 (626.1159ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
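The "may be ok" note reflects that minikube status encodes cluster state in its exit code, so the harness records a non-zero exit rather than aborting the post-mortem. A small sketch of that tolerant invocation (a plain "minikube" binary name is assumed here; the report actually runs out/minikube-windows-amd64.exe):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "status", "--format={{.Host}}", "-p", "functional-482100")
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// A non-zero exit still comes with usable stdout ("Running" above),
		// so surface both instead of treating the run as a hard failure.
		fmt.Printf("status exited %d, host state: %s\n", ee.ExitCode(), out)
		return
	}
	if err != nil {
		panic(err)
	}
	fmt.Printf("host state: %s\n", out)
}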
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-482100 logs -n 25: (1.4270905s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                          ARGS                                                           │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-213400 image ls                                                                                              │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:42 UTC │ 13 Dec 25 08:42 UTC │
	│ image          │ functional-213400 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr     │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:42 UTC │ 13 Dec 25 08:42 UTC │
	│ image          │ functional-213400 image ls                                                                                              │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:42 UTC │ 13 Dec 25 08:42 UTC │
	│ image          │ functional-213400 image save --daemon kicbase/echo-server:functional-213400 --alsologtostderr                           │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:42 UTC │ 13 Dec 25 08:42 UTC │
	│ service        │ functional-213400 service hello-node --url --format={{.IP}}                                                             │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │                     │
	│ service        │ functional-213400 service hello-node --url                                                                              │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │                     │
	│ addons         │ functional-213400 addons list                                                                                           │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ addons         │ functional-213400 addons list -o json                                                                                   │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ license        │                                                                                                                         │ minikube          │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ start          │ -p functional-213400 --dry-run --memory 250MB --alsologtostderr --driver=docker                                         │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-213400 --alsologtostderr -v=1                                                          │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │                     │
	│ start          │ -p functional-213400 --dry-run --memory 250MB --alsologtostderr --driver=docker                                         │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │                     │
	│ start          │ -p functional-213400 --dry-run --alsologtostderr -v=1 --driver=docker                                                   │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │                     │
	│ update-context │ functional-213400 update-context --alsologtostderr -v=2                                                                 │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ update-context │ functional-213400 update-context --alsologtostderr -v=2                                                                 │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ image          │ functional-213400 image ls --format short --alsologtostderr                                                             │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ image          │ functional-213400 image ls --format yaml --alsologtostderr                                                              │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ ssh            │ functional-213400 ssh pgrep buildkitd                                                                                   │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │                     │
	│ image          │ functional-213400 image ls --format json --alsologtostderr                                                              │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ image          │ functional-213400 image build -t localhost/my-image:functional-213400 testdata\build --alsologtostderr                  │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ image          │ functional-213400 image ls --format table --alsologtostderr                                                             │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ image          │ functional-213400 image ls                                                                                              │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ delete         │ -p functional-213400                                                                                                    │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:48 UTC │ 13 Dec 25 08:48 UTC │
	│ start          │ -p functional-482100 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:48 UTC │                     │
	│ start          │ -p functional-482100 --alsologtostderr -v=8                                                                             │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:57 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:57:27
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 08:57:27.379293    1308 out.go:360] Setting OutFile to fd 1960 ...
	I1213 08:57:27.421775    1308 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:57:27.421775    1308 out.go:374] Setting ErrFile to fd 2020...
	I1213 08:57:27.421858    1308 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:57:27.434678    1308 out.go:368] Setting JSON to false
	I1213 08:57:27.436793    1308 start.go:133] hostinfo: {"hostname":"minikube4","uptime":2054,"bootTime":1765614192,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 08:57:27.436793    1308 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 08:57:27.440227    1308 out.go:179] * [functional-482100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 08:57:27.444177    1308 notify.go:221] Checking for updates...
	I1213 08:57:27.444177    1308 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:57:27.446958    1308 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:57:27.448893    1308 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 08:57:27.451179    1308 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:57:27.453000    1308 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:57:27.455340    1308 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 08:57:27.456010    1308 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:57:27.677552    1308 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 08:57:27.681550    1308 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:57:27.918123    1308 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-13 08:57:27.897746454 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 08:57:27.922386    1308 out.go:179] * Using the docker driver based on existing profile
	I1213 08:57:27.925483    1308 start.go:309] selected driver: docker
	I1213 08:57:27.925483    1308 start.go:927] validating driver "docker" against &{Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:57:27.925483    1308 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:57:27.931484    1308 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:57:28.158174    1308 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-13 08:57:28.141185883 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 08:57:28.238865    1308 cni.go:84] Creating CNI manager for ""
	I1213 08:57:28.238865    1308 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 08:57:28.239498    1308 start.go:353] cluster config:
	{Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:57:28.243527    1308 out.go:179] * Starting "functional-482100" primary control-plane node in "functional-482100" cluster
	I1213 08:57:28.245818    1308 cache.go:134] Beginning downloading kic base image for docker with docker
	I1213 08:57:28.247303    1308 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 08:57:28.251374    1308 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 08:57:28.251465    1308 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 08:57:28.251634    1308 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1213 08:57:28.251673    1308 cache.go:65] Caching tarball of preloaded images
	I1213 08:57:28.251673    1308 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 08:57:28.251673    1308 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1213 08:57:28.251673    1308 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\config.json ...
	I1213 08:57:28.331506    1308 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 08:57:28.331506    1308 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 08:57:28.331506    1308 cache.go:243] Successfully downloaded all kic artifacts
	I1213 08:57:28.331506    1308 start.go:360] acquireMachinesLock for functional-482100: {Name:mkdbad0c5d0c221588a4a9490c5c0730668b0a50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 08:57:28.331506    1308 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-482100"
	I1213 08:57:28.331506    1308 start.go:96] Skipping create...Using existing machine configuration
	I1213 08:57:28.331506    1308 fix.go:54] fixHost starting: 
	I1213 08:57:28.338850    1308 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
	I1213 08:57:28.394405    1308 fix.go:112] recreateIfNeeded on functional-482100: state=Running err=<nil>
	W1213 08:57:28.394453    1308 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 08:57:28.397828    1308 out.go:252] * Updating the running docker "functional-482100" container ...
	I1213 08:57:28.397828    1308 machine.go:94] provisionDockerMachine start ...
	I1213 08:57:28.401414    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:28.456355    1308 main.go:143] libmachine: Using SSH client type: native
	I1213 08:57:28.457085    1308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:57:28.457134    1308 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 08:57:28.656820    1308 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-482100
	
	I1213 08:57:28.656820    1308 ubuntu.go:182] provisioning hostname "functional-482100"
	I1213 08:57:28.660505    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:28.713653    1308 main.go:143] libmachine: Using SSH client type: native
	I1213 08:57:28.714127    1308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:57:28.714127    1308 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-482100 && echo "functional-482100" | sudo tee /etc/hostname
	I1213 08:57:28.912851    1308 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-482100
	
	I1213 08:57:28.916558    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:28.972916    1308 main.go:143] libmachine: Using SSH client type: native
	I1213 08:57:28.973035    1308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:57:28.973035    1308 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-482100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-482100/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-482100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 08:57:29.158720    1308 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 08:57:29.158720    1308 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1213 08:57:29.158720    1308 ubuntu.go:190] setting up certificates
	I1213 08:57:29.158720    1308 provision.go:84] configureAuth start
	I1213 08:57:29.162705    1308 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-482100
	I1213 08:57:29.217525    1308 provision.go:143] copyHostCerts
	I1213 08:57:29.217525    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1213 08:57:29.217525    1308 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1213 08:57:29.217525    1308 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1213 08:57:29.218193    1308 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1213 08:57:29.218931    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1213 08:57:29.219078    1308 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1213 08:57:29.219114    1308 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1213 08:57:29.219299    1308 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1213 08:57:29.220064    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1213 08:57:29.220064    1308 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1213 08:57:29.220064    1308 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1213 08:57:29.220064    1308 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1213 08:57:29.220972    1308 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-482100 san=[127.0.0.1 192.168.49.2 functional-482100 localhost minikube]
	I1213 08:57:29.312824    1308 provision.go:177] copyRemoteCerts
	I1213 08:57:29.317163    1308 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 08:57:29.320164    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:29.370164    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:29.504512    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1213 08:57:29.504655    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 08:57:29.542721    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1213 08:57:29.542721    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 08:57:29.574672    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1213 08:57:29.574672    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 08:57:29.604045    1308 provision.go:87] duration metric: took 445.3221ms to configureAuth
	I1213 08:57:29.604045    1308 ubuntu.go:206] setting minikube options for container-runtime
	I1213 08:57:29.605053    1308 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 08:57:29.610417    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:29.666069    1308 main.go:143] libmachine: Using SSH client type: native
	I1213 08:57:29.666532    1308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:57:29.666532    1308 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 08:57:29.836610    1308 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1213 08:57:29.836610    1308 ubuntu.go:71] root file system type: overlay
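	[editor's note] Provisioning probes the guest's root filesystem type over SSH with the df pipeline above and gets "overlay", as expected for a kicbase container. The same probe run locally, sketched in Go:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )

	    func main() {
	    	// Mirrors the log's probe: df --output=fstype / | tail -n 1
	    	out, err := exec.Command("df", "--output=fstype", "/").Output()
	    	if err != nil {
	    		panic(err)
	    	}
	    	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
	    	fmt.Println("root fstype:", lines[len(lines)-1])
	    }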
	I1213 08:57:29.836610    1308 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 08:57:29.840760    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:29.894590    1308 main.go:143] libmachine: Using SSH client type: native
	I1213 08:57:29.895592    1308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:57:29.895592    1308 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 08:57:30.101134    1308 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
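	[editor's note] The ExecStart rendered above exposes dockerd on tcp://0.0.0.0:2376 with --tlsverify against the CA and server pair copied into /etc/docker during configureAuth. A hedged sketch of exercising that mutual-TLS endpoint from Go (the address and cert paths are illustrative; /_ping is the Docker engine's health endpoint):

	    package main

	    import (
	    	"crypto/tls"
	    	"crypto/x509"
	    	"fmt"
	    	"io"
	    	"net/http"
	    	"os"
	    )

	    func main() {
	    	// Illustrative paths; this run keeps the client certs under
	    	// C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs.
	    	caPEM, err := os.ReadFile("ca.pem")
	    	if err != nil {
	    		panic(err)
	    	}
	    	pool := x509.NewCertPool()
	    	pool.AppendCertsFromPEM(caPEM)
	    	cert, err := tls.LoadX509KeyPair("cert.pem", "key.pem")
	    	if err != nil {
	    		panic(err)
	    	}
	    	client := &http.Client{Transport: &http.Transport{
	    		TLSClientConfig: &tls.Config{RootCAs: pool, Certificates: []tls.Certificate{cert}},
	    	}}
	    	resp, err := client.Get("https://192.168.49.2:2376/_ping") // illustrative address
	    	if err != nil {
	    		panic(err)
	    	}
	    	defer resp.Body.Close()
	    	body, _ := io.ReadAll(resp.Body)
	    	fmt.Println(resp.Status, string(body)) // "200 OK" + "OK" when the daemon is up
	    }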
	
	I1213 08:57:30.105760    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:30.161736    1308 main.go:143] libmachine: Using SSH client type: native
	I1213 08:57:30.162318    1308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:57:30.162318    1308 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 08:57:30.345094    1308 main.go:143] libmachine: SSH cmd err, output: <nil>: 
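	[editor's note] The empty command output above is the interesting part: the fresh unit is rendered to docker.service.new, diffed against the live unit, and only when they differ is it moved into place and Docker restarted, so a re-provision with an unchanged config never bounces the daemon. The compare-then-swap shape of that shell one-liner, sketched in Go (paths from the log; the restart hook is illustrative):

	    package main

	    import (
	    	"bytes"
	    	"fmt"
	    	"os"
	    	"os/exec"
	    )

	    // installIfChanged swaps newPath over livePath and restarts the service
	    // only when the contents differ, mirroring the log's
	    // `diff ... || { mv ...; systemctl ...; }` idiom.
	    func installIfChanged(livePath, newPath string, restart func() error) error {
	    	live, _ := os.ReadFile(livePath) // a missing live file counts as "changed"
	    	next, err := os.ReadFile(newPath)
	    	if err != nil {
	    		return err
	    	}
	    	if bytes.Equal(live, next) {
	    		return nil // unchanged: leave the running service alone
	    	}
	    	if err := os.Rename(newPath, livePath); err != nil {
	    		return err
	    	}
	    	return restart()
	    }

	    func main() {
	    	err := installIfChanged(
	    		"/lib/systemd/system/docker.service",
	    		"/lib/systemd/system/docker.service.new",
	    		func() error { return exec.Command("systemctl", "daemon-reload").Run() },
	    	)
	    	if err != nil {
	    		fmt.Fprintln(os.Stderr, err)
	    	}
	    }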
	I1213 08:57:30.345094    1308 machine.go:97] duration metric: took 1.947253s to provisionDockerMachine
	I1213 08:57:30.345094    1308 start.go:293] postStartSetup for "functional-482100" (driver="docker")
	I1213 08:57:30.345094    1308 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 08:57:30.349348    1308 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 08:57:30.352292    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:30.407399    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:30.537367    1308 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 08:57:30.545885    1308 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 08:57:30.545957    1308 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 08:57:30.545957    1308 command_runner.go:130] > VERSION_ID="12"
	I1213 08:57:30.545957    1308 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 08:57:30.545957    1308 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 08:57:30.545957    1308 command_runner.go:130] > ID=debian
	I1213 08:57:30.545957    1308 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 08:57:30.545957    1308 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 08:57:30.545957    1308 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 08:57:30.546095    1308 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 08:57:30.546117    1308 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 08:57:30.546141    1308 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1213 08:57:30.546161    1308 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1213 08:57:30.546880    1308 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> 29682.pem in /etc/ssl/certs
	I1213 08:57:30.546880    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> /etc/ssl/certs/29682.pem
	I1213 08:57:30.547539    1308 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\2968\hosts -> hosts in /etc/test/nested/copy/2968
	I1213 08:57:30.547539    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\2968\hosts -> /etc/test/nested/copy/2968/hosts
	I1213 08:57:30.551732    1308 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2968
	I1213 08:57:30.565806    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /etc/ssl/certs/29682.pem (1708 bytes)
	I1213 08:57:30.596092    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\2968\hosts --> /etc/test/nested/copy/2968/hosts (40 bytes)
	I1213 08:57:30.624821    1308 start.go:296] duration metric: took 279.7253ms for postStartSetup
	I1213 08:57:30.629883    1308 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 08:57:30.633087    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:30.686590    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:30.807695    1308 command_runner.go:130] > 1%
	I1213 08:57:30.812335    1308 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 08:57:30.820851    1308 command_runner.go:130] > 950G
	I1213 08:57:30.820851    1308 fix.go:56] duration metric: took 2.4893282s for fixHost
	I1213 08:57:30.820851    1308 start.go:83] releasing machines lock for "functional-482100", held for 2.4893282s
	I1213 08:57:30.824237    1308 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-482100
	I1213 08:57:30.876765    1308 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1213 08:57:30.881324    1308 ssh_runner.go:195] Run: cat /version.json
	I1213 08:57:30.881371    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:30.884518    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:30.935914    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:30.935914    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:31.066730    1308 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1213 08:57:31.066730    1308 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1213 08:57:31.066730    1308 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 08:57:31.071708    1308 ssh_runner.go:195] Run: systemctl --version
	I1213 08:57:31.084553    1308 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 08:57:31.084640    1308 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 08:57:31.090087    1308 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 08:57:31.099561    1308 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 08:57:31.100565    1308 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 08:57:31.105214    1308 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 08:57:31.124077    1308 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 08:57:31.124077    1308 start.go:496] detecting cgroup driver to use...
	I1213 08:57:31.124077    1308 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 08:57:31.124648    1308 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 08:57:31.147852    1308 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1213 08:57:31.152021    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 08:57:31.174172    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1213 08:57:31.176576    1308 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1213 08:57:31.176576    1308 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1213 08:57:31.189695    1308 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 08:57:31.194128    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 08:57:31.213650    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:57:31.232544    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 08:57:31.252203    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:57:31.274175    1308 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 08:57:31.296706    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 08:57:31.315777    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 08:57:31.334664    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
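	[editor's note] The sed series above patches /etc/containerd/config.toml in place: it pins the sandbox image to pause:3.10.1, forces SystemdCgroup = false to match the cgroupfs driver detected on the host, migrates the legacy io.containerd.runtime.v1.linux and io.containerd.runc.v1 names to runc.v2, and points conf_dir at /etc/cni/net.d. The same indentation-preserving rewrite for one of those rules, sketched in Go with regexp:

	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"regexp"
	    )

	    func main() {
	    	const path = "/etc/containerd/config.toml"
	    	data, err := os.ReadFile(path)
	    	if err != nil {
	    		panic(err)
	    	}
	    	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	    	patched := re.ReplaceAll(data, []byte(`${1}SystemdCgroup = false`))
	    	if err := os.WriteFile(path, patched, 0o644); err != nil {
	    		panic(err)
	    	}
	    	fmt.Println("patched", path)
	    }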
	I1213 08:57:31.355488    1308 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 08:57:31.369376    1308 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 08:57:31.373398    1308 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 08:57:31.391830    1308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:57:31.608372    1308 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 08:57:31.906123    1308 start.go:496] detecting cgroup driver to use...
	I1213 08:57:31.906123    1308 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 08:57:31.911089    1308 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 08:57:31.932611    1308 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1213 08:57:31.933145    1308 command_runner.go:130] > [Unit]
	I1213 08:57:31.933145    1308 command_runner.go:130] > Description=Docker Application Container Engine
	I1213 08:57:31.933145    1308 command_runner.go:130] > Documentation=https://docs.docker.com
	I1213 08:57:31.933145    1308 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1213 08:57:31.933145    1308 command_runner.go:130] > Wants=network-online.target containerd.service
	I1213 08:57:31.933145    1308 command_runner.go:130] > Requires=docker.socket
	I1213 08:57:31.933145    1308 command_runner.go:130] > StartLimitBurst=3
	I1213 08:57:31.933239    1308 command_runner.go:130] > StartLimitIntervalSec=60
	I1213 08:57:31.933239    1308 command_runner.go:130] > [Service]
	I1213 08:57:31.933239    1308 command_runner.go:130] > Type=notify
	I1213 08:57:31.933239    1308 command_runner.go:130] > Restart=always
	I1213 08:57:31.933239    1308 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1213 08:57:31.933239    1308 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1213 08:57:31.933303    1308 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1213 08:57:31.933336    1308 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1213 08:57:31.933336    1308 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1213 08:57:31.933336    1308 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1213 08:57:31.933336    1308 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1213 08:57:31.933336    1308 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1213 08:57:31.933336    1308 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1213 08:57:31.933415    1308 command_runner.go:130] > ExecStart=
	I1213 08:57:31.933415    1308 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1213 08:57:31.933415    1308 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1213 08:57:31.933415    1308 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1213 08:57:31.933498    1308 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1213 08:57:31.933498    1308 command_runner.go:130] > LimitNOFILE=infinity
	I1213 08:57:31.933498    1308 command_runner.go:130] > LimitNPROC=infinity
	I1213 08:57:31.933498    1308 command_runner.go:130] > LimitCORE=infinity
	I1213 08:57:31.933498    1308 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1213 08:57:31.933498    1308 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1213 08:57:31.933498    1308 command_runner.go:130] > TasksMax=infinity
	I1213 08:57:31.933498    1308 command_runner.go:130] > TimeoutStartSec=0
	I1213 08:57:31.933572    1308 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1213 08:57:31.933591    1308 command_runner.go:130] > Delegate=yes
	I1213 08:57:31.933591    1308 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1213 08:57:31.933591    1308 command_runner.go:130] > KillMode=process
	I1213 08:57:31.933591    1308 command_runner.go:130] > OOMScoreAdjust=-500
	I1213 08:57:31.933591    1308 command_runner.go:130] > [Install]
	I1213 08:57:31.933591    1308 command_runner.go:130] > WantedBy=multi-user.target
	I1213 08:57:31.938295    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 08:57:31.960377    1308 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 08:57:32.049121    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 08:57:32.071680    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 08:57:32.093496    1308 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 08:57:32.115103    1308 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
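	[editor's note] Note that /etc/crictl.yaml is written twice in this start: the first pass (08:57:31) pointed crictl at containerd's socket while containerd was being configured, and now that the docker runtime has been selected, it is repointed at the cri-dockerd shim. The file is a single key, so writing it from Go is one call (path and contents straight from the log):

	    package main

	    import "os"

	    func main() {
	    	// Matches the log: crictl reaches dockerd through the cri-dockerd shim socket.
	    	cfg := "runtime-endpoint: unix:///var/run/cri-dockerd.sock\n"
	    	if err := os.WriteFile("/etc/crictl.yaml", []byte(cfg), 0o644); err != nil {
	    		panic(err)
	    	}
	    }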
	I1213 08:57:32.119951    1308 ssh_runner.go:195] Run: which cri-dockerd
	I1213 08:57:32.126371    1308 command_runner.go:130] > /usr/bin/cri-dockerd
	I1213 08:57:32.130902    1308 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 08:57:32.144169    1308 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1213 08:57:32.170348    1308 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 08:57:32.320163    1308 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 08:57:32.454851    1308 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 08:57:32.454851    1308 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 08:57:32.483674    1308 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1213 08:57:32.505831    1308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:57:32.661991    1308 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 08:57:33.665330    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 08:57:33.689450    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 08:57:33.711087    1308 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1213 08:57:33.739462    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 08:57:33.760714    1308 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 08:57:33.900242    1308 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 08:57:34.052335    1308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:57:34.188283    1308 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 08:57:34.213402    1308 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1213 08:57:34.237672    1308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:57:34.381154    1308 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 08:57:34.499581    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 08:57:34.518141    1308 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 08:57:34.522686    1308 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 08:57:34.529494    1308 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1213 08:57:34.529494    1308 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 08:57:34.529494    1308 command_runner.go:130] > Device: 0,112	Inode: 1755        Links: 1
	I1213 08:57:34.529494    1308 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1213 08:57:34.529494    1308 command_runner.go:130] > Access: 2025-12-13 08:57:34.386291479 +0000
	I1213 08:57:34.529494    1308 command_runner.go:130] > Modify: 2025-12-13 08:57:34.386291479 +0000
	I1213 08:57:34.529494    1308 command_runner.go:130] > Change: 2025-12-13 08:57:34.386291479 +0000
	I1213 08:57:34.529494    1308 command_runner.go:130] >  Birth: -
	I1213 08:57:34.529494    1308 start.go:564] Will wait 60s for crictl version
	I1213 08:57:34.534224    1308 ssh_runner.go:195] Run: which crictl
	I1213 08:57:34.541202    1308 command_runner.go:130] > /usr/local/bin/crictl
	I1213 08:57:34.545269    1308 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 08:57:34.587655    1308 command_runner.go:130] > Version:  0.1.0
	I1213 08:57:34.587655    1308 command_runner.go:130] > RuntimeName:  docker
	I1213 08:57:34.587655    1308 command_runner.go:130] > RuntimeVersion:  29.1.2
	I1213 08:57:34.587655    1308 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 08:57:34.587655    1308 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1213 08:57:34.590292    1308 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 08:57:34.627699    1308 command_runner.go:130] > 29.1.2
	I1213 08:57:34.631112    1308 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 08:57:34.669555    1308 command_runner.go:130] > 29.1.2
	I1213 08:57:34.677969    1308 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1213 08:57:34.681392    1308 cli_runner.go:164] Run: docker exec -t functional-482100 dig +short host.docker.internal
	I1213 08:57:34.898094    1308 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1213 08:57:34.902419    1308 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1213 08:57:34.910595    1308 command_runner.go:130] > 192.168.65.254	host.minikube.internal
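	[editor's note] The host address is discovered by dig-ing host.docker.internal from inside the container (192.168.65.254 on this Docker Desktop host) and is expected to already be pinned in /etc/hosts as host.minikube.internal, which the grep above confirms. A small Go check that the mapping resolves, as a sketch to be run inside the guest (on Linux the pure-Go resolver consults /etc/hosts first):

	    package main

	    import (
	    	"fmt"
	    	"net"
	    )

	    func main() {
	    	addrs, err := net.LookupHost("host.minikube.internal")
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Println(addrs) // expected to contain 192.168.65.254 in this run
	    }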
	I1213 08:57:34.914565    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:34.972832    1308 kubeadm.go:884] updating cluster {Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 08:57:34.972832    1308 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 08:57:34.977045    1308 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1213 08:57:35.008739    1308 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 08:57:35.008739    1308 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 08:57:35.010249    1308 docker.go:621] Images already preloaded, skipping extraction
	I1213 08:57:35.013678    1308 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 08:57:35.043903    1308 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 08:57:35.043957    1308 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 08:57:35.043957    1308 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 08:57:35.043957    1308 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 08:57:35.043957    1308 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1213 08:57:35.043957    1308 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1213 08:57:35.043957    1308 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1213 08:57:35.044022    1308 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 08:57:35.044104    1308 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 08:57:35.044104    1308 cache_images.go:86] Images are preloaded, skipping loading
	I1213 08:57:35.044160    1308 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1213 08:57:35.044312    1308 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-482100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 08:57:35.047625    1308 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1213 08:57:35.491294    1308 command_runner.go:130] > cgroupfs
	I1213 08:57:35.491294    1308 cni.go:84] Creating CNI manager for ""
	I1213 08:57:35.491294    1308 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 08:57:35.491294    1308 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 08:57:35.491294    1308 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-482100 NodeName:functional-482100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 08:57:35.491294    1308 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-482100"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
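	[editor's note] The rendered kubeadm config above is four YAML documents in one file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A hedged sketch of pulling one document back out with gopkg.in/yaml.v3 (module assumed available; only two kubelet fields modeled):

	    package main

	    import (
	    	"bytes"
	    	"fmt"
	    	"io"
	    	"os"

	    	"gopkg.in/yaml.v3"
	    )

	    type kubeletConfig struct {
	    	Kind          string `yaml:"kind"`
	    	CgroupDriver  string `yaml:"cgroupDriver"`
	    	ClusterDomain string `yaml:"clusterDomain"`
	    }

	    func main() {
	    	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
	    	if err != nil {
	    		panic(err)
	    	}
	    	dec := yaml.NewDecoder(bytes.NewReader(data)) // streams the "---"-separated documents
	    	for {
	    		var kc kubeletConfig
	    		if err := dec.Decode(&kc); err == io.EOF {
	    			break
	    		} else if err != nil {
	    			panic(err)
	    		}
	    		if kc.Kind == "KubeletConfiguration" {
	    			fmt.Println(kc.CgroupDriver, kc.ClusterDomain) // "cgroupfs cluster.local" here
	    		}
	    	}
	    }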
	
	I1213 08:57:35.495479    1308 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 08:57:35.511680    1308 command_runner.go:130] > kubeadm
	I1213 08:57:35.511680    1308 command_runner.go:130] > kubectl
	I1213 08:57:35.511680    1308 command_runner.go:130] > kubelet
	I1213 08:57:35.511680    1308 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 08:57:35.515943    1308 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 08:57:35.527808    1308 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1213 08:57:35.545969    1308 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 08:57:35.565749    1308 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1213 08:57:35.590269    1308 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 08:57:35.598806    1308 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 08:57:35.603098    1308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:57:35.752426    1308 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 08:57:35.771354    1308 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100 for IP: 192.168.49.2
	I1213 08:57:35.771354    1308 certs.go:195] generating shared ca certs ...
	I1213 08:57:35.771354    1308 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:57:35.771354    1308 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1213 08:57:35.772397    1308 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1213 08:57:35.772549    1308 certs.go:257] generating profile certs ...
	I1213 08:57:35.772794    1308 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\client.key
	I1213 08:57:35.772794    1308 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.key.13621831
	I1213 08:57:35.773396    1308 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.key
	I1213 08:57:35.773447    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 08:57:35.773539    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1213 08:57:35.773616    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 08:57:35.773761    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 08:57:35.773831    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 08:57:35.773939    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 08:57:35.773999    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 08:57:35.774105    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 08:57:35.774559    1308 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem (1338 bytes)
	W1213 08:57:35.774827    1308 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968_empty.pem, impossibly tiny 0 bytes
	I1213 08:57:35.774870    1308 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1213 08:57:35.775069    1308 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1213 08:57:35.775069    1308 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1213 08:57:35.775069    1308 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1213 08:57:35.775696    1308 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem (1708 bytes)
	I1213 08:57:35.775842    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:57:35.775842    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem -> /usr/share/ca-certificates/2968.pem
	I1213 08:57:35.775842    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> /usr/share/ca-certificates/29682.pem
	I1213 08:57:35.775842    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 08:57:35.807179    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 08:57:35.833688    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 08:57:35.863566    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 08:57:35.894920    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 08:57:35.921314    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 08:57:35.946004    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 08:57:35.973030    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 08:57:36.001405    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 08:57:36.027495    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem --> /usr/share/ca-certificates/2968.pem (1338 bytes)
	I1213 08:57:36.053673    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /usr/share/ca-certificates/29682.pem (1708 bytes)
	I1213 08:57:36.083163    1308 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 08:57:36.106205    1308 ssh_runner.go:195] Run: openssl version
	I1213 08:57:36.124518    1308 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 08:57:36.128653    1308 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2968.pem
	I1213 08:57:36.148109    1308 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2968.pem /etc/ssl/certs/2968.pem
	I1213 08:57:36.170644    1308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2968.pem
	I1213 08:57:36.179909    1308 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 08:48 /usr/share/ca-certificates/2968.pem
	I1213 08:57:36.179909    1308 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:48 /usr/share/ca-certificates/2968.pem
	I1213 08:57:36.184506    1308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2968.pem
	I1213 08:57:36.230303    1308 command_runner.go:130] > 51391683
	I1213 08:57:36.235418    1308 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
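	[editor's note] Lines like these show how a CA becomes visible to OpenSSL consumers: `openssl x509 -hash` prints the subject hash (51391683 for 2968.pem), and a /etc/ssl/certs/<hash>.0 symlink is created so directory-based lookup finds the cert. The same two steps as a Go sketch, shelling out to the identical openssl invocation (paths from the log; the symlink needs root):

	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"os/exec"
	    	"path/filepath"
	    	"strings"
	    )

	    func main() {
	    	const pemPath = "/usr/share/ca-certificates/2968.pem"
	    	// Same command as the log: print OpenSSL's subject hash for the cert.
	    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	    	if err != nil {
	    		panic(err)
	    	}
	    	hash := strings.TrimSpace(string(out)) // "51391683" in this run
	    	link := filepath.Join("/etc/ssl/certs", hash+".0")
	    	_ = os.Remove(link) // ln -fs semantics: replace any stale link
	    	if err := os.Symlink(pemPath, link); err != nil {
	    		panic(err)
	    	}
	    	fmt.Println(link, "->", pemPath)
	    }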
	I1213 08:57:36.252420    1308 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29682.pem
	I1213 08:57:36.271009    1308 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29682.pem /etc/ssl/certs/29682.pem
	I1213 08:57:36.291738    1308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29682.pem
	I1213 08:57:36.301002    1308 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 08:48 /usr/share/ca-certificates/29682.pem
	I1213 08:57:36.301002    1308 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:48 /usr/share/ca-certificates/29682.pem
	I1213 08:57:36.306035    1308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29682.pem
	I1213 08:57:36.348842    1308 command_runner.go:130] > 3ec20f2e
	I1213 08:57:36.353574    1308 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 08:57:36.371994    1308 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:57:36.390417    1308 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 08:57:36.409132    1308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:57:36.417987    1308 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 08:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:57:36.418020    1308 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:57:36.422336    1308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:57:36.464222    1308 command_runner.go:130] > b5213941
	I1213 08:57:36.469763    1308 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 08:57:36.486907    1308 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 08:57:36.493430    1308 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 08:57:36.493430    1308 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 08:57:36.493430    1308 command_runner.go:130] > Device: 8,48	Inode: 15294       Links: 1
	I1213 08:57:36.493430    1308 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 08:57:36.493430    1308 command_runner.go:130] > Access: 2025-12-13 08:53:22.558756963 +0000
	I1213 08:57:36.493430    1308 command_runner.go:130] > Modify: 2025-12-13 08:49:20.154446480 +0000
	I1213 08:57:36.493430    1308 command_runner.go:130] > Change: 2025-12-13 08:49:20.154446480 +0000
	I1213 08:57:36.493430    1308 command_runner.go:130] >  Birth: 2025-12-13 08:49:20.154446480 +0000
	I1213 08:57:36.498322    1308 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 08:57:36.542775    1308 command_runner.go:130] > Certificate will not expire
	I1213 08:57:36.547618    1308 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 08:57:36.590488    1308 command_runner.go:130] > Certificate will not expire
	I1213 08:57:36.594826    1308 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 08:57:36.640226    1308 command_runner.go:130] > Certificate will not expire
	I1213 08:57:36.644848    1308 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 08:57:36.698932    1308 command_runner.go:130] > Certificate will not expire
	I1213 08:57:36.703709    1308 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 08:57:36.746225    1308 command_runner.go:130] > Certificate will not expire
	I1213 08:57:36.751252    1308 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 08:57:36.796246    1308 command_runner.go:130] > Certificate will not expire
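	[editor's note] The -checkend 86400 loop above asserts that every control-plane certificate stays valid for at least another 24 hours before being reused. The native equivalent is a NotAfter comparison, sketched in Go for one of the certs (path from the log):

	    package main

	    import (
	    	"crypto/x509"
	    	"encoding/pem"
	    	"fmt"
	    	"os"
	    	"time"
	    )

	    func main() {
	    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	    	if err != nil {
	    		panic(err)
	    	}
	    	block, _ := pem.Decode(data)
	    	if block == nil {
	    		panic("no PEM block")
	    	}
	    	cert, err := x509.ParseCertificate(block.Bytes)
	    	if err != nil {
	    		panic(err)
	    	}
	    	// Equivalent of: openssl x509 -noout -in <cert> -checkend 86400
	    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
	    		fmt.Println("Certificate will expire")
	    	} else {
	    		fmt.Println("Certificate will not expire")
	    	}
	    }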
	I1213 08:57:36.796605    1308 kubeadm.go:401] StartCluster: {Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:57:36.800619    1308 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 08:57:36.835511    1308 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 08:57:36.848084    1308 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 08:57:36.848084    1308 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 08:57:36.848084    1308 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 08:57:36.848084    1308 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 08:57:36.848084    1308 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 08:57:36.853050    1308 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 08:57:36.866011    1308 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:57:36.869675    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:36.923417    1308 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-482100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:57:36.923684    1308 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "functional-482100" cluster setting kubeconfig missing "functional-482100" context setting]
	I1213 08:57:36.923684    1308 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:57:36.940090    1308 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:57:36.940688    1308 kapi.go:59] client config for functional-482100: &rest.Config{Host:"https://127.0.0.1:63845", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff744969080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 08:57:36.941864    1308 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 08:57:36.941864    1308 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 08:57:36.941864    1308 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 08:57:36.941864    1308 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 08:57:36.941864    1308 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 08:57:36.941864    1308 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
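The envvar.go lines above are client-go logging the default state of its client-side feature gates. As I understand client-go's env-var gates, an explicit KUBE_FEATURE_<Name> environment variable overrides the compiled-in default; a minimal sketch of that resolution (treat the prefix and helper as illustrative assumptions, not client-go's exact code):

package main

import (
	"fmt"
	"os"
	"strconv"
)

// featureEnabled resolves a client feature gate: an explicit
// KUBE_FEATURE_<Name> setting wins, otherwise the default applies.
func featureEnabled(name string, def bool) bool {
	v, ok := os.LookupEnv("KUBE_FEATURE_" + name)
	if !ok {
		return def
	}
	b, err := strconv.ParseBool(v)
	if err != nil {
		return def // malformed values fall back to the default
	}
	return b
}

func main() {
	fmt.Println("WatchListClient:", featureEnabled("WatchListClient", false))
	fmt.Println("InOrderInformers:", featureEnabled("InOrderInformers", true))
}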
	I1213 08:57:36.946352    1308 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 08:57:36.960987    1308 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1213 08:57:36.961998    1308 kubeadm.go:602] duration metric: took 113.913ms to restartPrimaryControlPlane
	I1213 08:57:36.961998    1308 kubeadm.go:403] duration metric: took 165.4668ms to StartCluster
	I1213 08:57:36.961998    1308 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:57:36.961998    1308 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:57:36.963076    1308 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:57:36.963883    1308 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 08:57:36.963883    1308 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 08:57:36.963883    1308 addons.go:70] Setting default-storageclass=true in profile "functional-482100"
	I1213 08:57:36.963883    1308 addons.go:70] Setting storage-provisioner=true in profile "functional-482100"
	I1213 08:57:36.963883    1308 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 08:57:36.963883    1308 addons.go:239] Setting addon storage-provisioner=true in "functional-482100"
	I1213 08:57:36.963883    1308 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-482100"
	I1213 08:57:36.964406    1308 host.go:66] Checking if "functional-482100" exists ...
	I1213 08:57:36.966968    1308 out.go:179] * Verifying Kubernetes components...
	I1213 08:57:36.972864    1308 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
	I1213 08:57:36.972864    1308 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
	I1213 08:57:36.974067    1308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:57:37.028122    1308 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 08:57:37.032121    1308 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:37.032121    1308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 08:57:37.035128    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:37.050133    1308 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:57:37.050133    1308 kapi.go:59] client config for functional-482100: &rest.Config{Host:"https://127.0.0.1:63845", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff744969080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 08:57:37.051141    1308 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 08:57:37.051141    1308 addons.go:239] Setting addon default-storageclass=true in "functional-482100"
	I1213 08:57:37.051141    1308 host.go:66] Checking if "functional-482100" exists ...
	I1213 08:57:37.059130    1308 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
	I1213 08:57:37.090124    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
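The sshutil.go line above builds the SSH client used for the scp-from-memory copies of the addon manifests, from a per-machine private key on disk. A minimal sketch of that client setup with golang.org/x/crypto/ssh, reusing the endpoint and key path from the log; InsecureIgnoreHostKey is tolerable only because the target is a throwaway local container:

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and endpoint come from the sshutil.go line above.
	key, err := os.ReadFile(`C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa`)
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User: "docker",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Acceptable only for a throwaway local test VM/container.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:63841", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	log.Println("connected:", string(client.ServerVersion()))
}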
	I1213 08:57:37.112122    1308 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:37.112122    1308 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 08:57:37.115122    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:37.124126    1308 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 08:57:37.163123    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:37.218965    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:37.244846    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:37.292847    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:37.297857    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:37.298846    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:37.298846    1308 retry.go:31] will retry after 278.997974ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
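Note what is actually failing here: the manifests are fine, but kubectl's client-side validation must download the OpenAPI schema from the API server, which is still refusing connections on localhost:8441 while the control plane restarts; hence the hint about --validate=false. minikube's retry.go then re-runs each apply with a growing, jittered delay, which is the pattern repeated through the rest of this log. A sketch of that retry shape, assuming a generic doubling backoff (retry.go's exact policy is not shown in the log):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts run out,
// sleeping a jittered, roughly doubling delay in between -- the same
// shape as the "will retry after ..." lines in the log.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base<<uint(i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("connect: connection refused")
		}
		return nil
	})
	fmt.Println("done after", calls, "calls, err =", err)
}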
	I1213 08:57:37.298846    1308 node_ready.go:35] waiting up to 6m0s for node "functional-482100" to be "Ready" ...
	I1213 08:57:37.298846    1308 type.go:168] "Request Body" body=""
	I1213 08:57:37.298846    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:37.300855    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:37.389624    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:37.394960    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:37.394960    1308 retry.go:31] will retry after 212.815514ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:37.583432    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:37.612508    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:37.662694    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:37.668089    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:37.668089    1308 retry.go:31] will retry after 421.785382ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:37.691227    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:37.696684    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:37.696684    1308 retry.go:31] will retry after 387.963958ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.090409    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:38.094708    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:38.167644    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:38.172931    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.172931    1308 retry.go:31] will retry after 654.783355ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.174195    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:38.178117    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.178179    1308 retry.go:31] will retry after 288.314182ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.301152    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:38.301683    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:38.304388    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
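The with_retry.go lines show the other retry loop in play: the client-go round tripper keeps getting a Retry-After hint from the still-unhealthy endpoint, sleeps the advertised delay, and counts attempts up to its cap of 10 before surfacing an error. A minimal sketch of honoring Retry-After in Go, handling only the delay-in-seconds form (the header may also carry an HTTP date, ignored here):

package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// retryAfter extracts a delay from a response, defaulting when the header
// is absent or malformed -- roughly what a retrying client does with the
// "Got a Retry-After response" hints in the log.
func retryAfter(resp *http.Response, fallback time.Duration) time.Duration {
	if s := resp.Header.Get("Retry-After"); s != "" {
		if secs, err := strconv.Atoi(s); err == nil && secs >= 0 {
			return time.Duration(secs) * time.Second
		}
	}
	return fallback
}

func main() {
	resp := &http.Response{Header: http.Header{"Retry-After": []string{"1"}}}
	fmt.Println("sleeping", retryAfter(resp, 2*time.Second)) // sleeping 1s
}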
	I1213 08:57:38.472962    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:38.544996    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:38.548547    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.548623    1308 retry.go:31] will retry after 1.098701937s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.833272    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:38.912142    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:38.912142    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.912142    1308 retry.go:31] will retry after 808.399476ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:39.305249    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:39.305249    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:39.308473    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:39.652260    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:39.721531    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:39.726229    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 08:57:39.726899    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:39.726899    1308 retry.go:31] will retry after 1.580407023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:39.799856    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:39.802238    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:39.802238    1308 retry.go:31] will retry after 1.163449845s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:40.308791    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:40.308791    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:40.310792    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:40.971107    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:41.051235    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:41.056481    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:41.056595    1308 retry.go:31] will retry after 2.292483012s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:41.312219    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:41.312219    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:41.313763    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:41.315446    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:41.385280    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:41.389328    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:41.389328    1308 retry.go:31] will retry after 2.10655749s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:42.316064    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:42.316469    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:42.319430    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:43.319659    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:43.319659    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:43.322154    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:43.354119    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:43.424936    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:43.428566    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:43.428566    1308 retry.go:31] will retry after 2.451441131s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:43.500768    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:43.577861    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:43.581800    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:43.581870    1308 retry.go:31] will retry after 1.842575818s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:44.322393    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:44.322393    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:44.326064    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:45.326352    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:45.326352    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:45.329823    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:45.430441    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:45.504084    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:45.509721    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:45.509813    1308 retry.go:31] will retry after 3.320490506s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:45.885819    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:45.962560    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:45.966882    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:45.966882    1308 retry.go:31] will retry after 5.131341184s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:46.330362    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:46.330362    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:46.333170    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:47.333778    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:47.333778    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:47.337260    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1213 08:57:47.337260    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 08:57:47.337260    1308 type.go:168] "Request Body" body=""
	I1213 08:57:47.337260    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:47.340404    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
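The node_ready.go warning above shows the readiness wait tolerating transient EOFs while the API server restarts: every error is swallowed and the node GET is simply retried until the Ready condition turns true or the 6m0s budget runs out. A sketch of the same wait with client-go (a minimal illustration, not minikube's node_ready.go; wait.PollUntilContextTimeout is the current polling helper in apimachinery):

package main

import (
	"context"
	"log"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls until the node reports Ready=True, swallowing
// transient errors (connection refused, EOF) the way the log does.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // retry while the apiserver restarts
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitNodeReady(context.Background(), cs, "functional-482100"); err != nil {
		log.Fatal(err)
	}
}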
	I1213 08:57:48.340937    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:48.340937    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:48.344443    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:48.835623    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:48.914169    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:48.918486    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:48.918486    1308 retry.go:31] will retry after 6.605490232s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:49.345162    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:49.345162    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:49.347526    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:50.348478    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:50.348478    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:50.351813    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:51.103982    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:51.174396    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:51.177073    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:51.177136    1308 retry.go:31] will retry after 4.217545245s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:51.352019    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:51.352363    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:51.354826    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:52.355908    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:52.355908    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:52.358993    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:53.359347    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:53.359730    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:53.362425    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:54.363245    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:54.363536    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:54.366267    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:55.367715    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:55.367715    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:55.371143    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:55.400351    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:55.476385    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:55.480063    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:55.480122    1308 retry.go:31] will retry after 11.422205159s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:55.528824    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:55.599872    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:55.604580    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:55.604626    1308 retry.go:31] will retry after 13.338795854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:56.371517    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:56.371517    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:56.375228    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:57.375899    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:57.375899    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:57.378899    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1213 08:57:57.379427    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[condensed: Ready re-check at 08:57:57 returned empty; GET https://127.0.0.1:63845/api/v1/nodes/functional-482100 retried on 1s Retry-After (attempts 1-9, 08:57:58-08:58:06); all responses empty]
	I1213 08:58:06.907694    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:58:06.989453    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:58:06.993505    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:06.993505    1308 retry.go:31] will retry after 9.12046724s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
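	[editor's note: the validation error above is a symptom, not the cause. kubectl cannot download the OpenAPI schema because the apiserver at localhost:8441 refuses connections, so the suggested --validate=false would only skip validation; the apply itself would still fail against the dead endpoint. minikube therefore re-queues each apply with a growing, jittered delay (9.1s, 32.1s, 42.3s for storageclass; 13.3s, 18.2s, 22.3s, 43.9s for storage-provisioner over this run). A minimal Go sketch of that retry shape follows; the function name, base delay, and jitter factor are illustrative assumptions, not minikube's actual retry.go.]

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithJitter re-runs fn until it succeeds or attempts run out,
// sleeping an exponentially growing, randomly jittered delay between
// runs -- the same shape as the "will retry after Ns" lines above.
func retryWithJitter(maxAttempts int, base time.Duration, fn func() error) error {
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		backoff := base << attempt                                        // 10s, 20s, 40s, ...
		sleep := time.Duration(float64(backoff) * (0.5 + rand.Float64())) // +/-50% jitter
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return fmt.Errorf("giving up after %d attempts: %w", maxAttempts, err)
}

func main() {
	// Simulate the failing apply: the endpoint always refuses connections.
	err := retryWithJitter(3, 10*time.Second, func() error {
		return errors.New("dial tcp [::1]:8441: connect: connection refused")
	})
	fmt.Println(err)
}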
	[condensed: GET https://127.0.0.1:63845/api/v1/nodes/functional-482100 retried on 1s Retry-After (attempt 10, 08:58:07); response empty]
	W1213 08:58:07.423766    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[condensed: Ready re-check at 08:58:07 returned empty; GET https://127.0.0.1:63845/api/v1/nodes/functional-482100 retried on 1s Retry-After (attempt 1, 08:58:08); response empty]
	I1213 08:58:08.949269    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:58:09.021443    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:58:09.021574    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:09.021574    1308 retry.go:31] will retry after 18.212645226s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[condensed: GET https://127.0.0.1:63845/api/v1/nodes/functional-482100 retried on 1s Retry-After (attempts 2-8, 08:58:09-08:58:15); all responses empty]
	I1213 08:58:16.119722    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:58:16.199861    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:58:16.203796    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:16.203841    1308 retry.go:31] will retry after 32.127892546s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[condensed: GET https://127.0.0.1:63845/api/v1/nodes/functional-482100 retried on 1s Retry-After (attempts 9-10, 08:58:16-08:58:17); all responses empty]
	W1213 08:58:17.464392    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[condensed: Ready re-check at 08:58:17 returned empty; GET https://127.0.0.1:63845/api/v1/nodes/functional-482100 retried on 1s Retry-After (attempts 1-9, 08:58:18-08:58:26); all responses empty]
	I1213 08:58:27.239685    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:58:27.315134    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:58:27.318446    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:27.318446    1308 retry.go:31] will retry after 22.292291086s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[condensed: GET https://127.0.0.1:63845/api/v1/nodes/functional-482100 retried on 1s Retry-After (attempt 10, 08:58:27); response empty]
	W1213 08:58:27.505700    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[condensed: Ready re-check at 08:58:27 returned empty; GET https://127.0.0.1:63845/api/v1/nodes/functional-482100 retried on 1s Retry-After (attempts 1-10, 08:58:28-08:58:37); all responses empty]
	W1213 08:58:37.547165    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[condensed: Ready re-check at 08:58:37 returned empty; GET https://127.0.0.1:63845/api/v1/nodes/functional-482100 retried on 1s Retry-After (attempts 1-10, 08:58:38-08:58:47); all responses empty]
	W1213 08:58:47.589116    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[condensed: Ready re-check at 08:58:47 returned empty]
	I1213 08:58:48.337063    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:58:48.419155    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:58:48.419236    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:48.419312    1308 retry.go:31] will retry after 42.344315794s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[condensed: GET https://127.0.0.1:63845/api/v1/nodes/functional-482100 retried on 1s Retry-After (attempts 1-2, 08:58:48-08:58:49); all responses empty]
	I1213 08:58:49.616306    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:58:49.690748    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:58:49.696226    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:49.696226    1308 retry.go:31] will retry after 43.889805704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[condensed: GET https://127.0.0.1:63845/api/v1/nodes/functional-482100 retried on 1s Retry-After (attempts 3-10, 08:58:50-08:58:57); all responses empty]
	W1213 08:58:57.630131    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[condensed: Ready re-check at 08:58:57 returned empty; GET https://127.0.0.1:63845/api/v1/nodes/functional-482100 retried on 1s Retry-After (attempts 1-10, 08:58:58-08:59:07); all responses empty]
	W1213 08:59:07.670852    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[condensed: Ready re-check at 08:59:07 returned empty; GET https://127.0.0.1:63845/api/v1/nodes/functional-482100 retried on 1s Retry-After (attempts 1-10, 08:59:08-08:59:17); all responses empty]
	W1213 08:59:17.716674    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
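
The with_retry.go/round_trippers.go churn above is client-go re-sending the identical GET once per second because every empty reply is treated as a Retry-After response; after ten retries node_ready.go surfaces the EOF and the cycle starts over. Below is a minimal standalone sketch of that client-side pattern, assuming plain net/http, a hypothetical attempt cap, and an empty-body-means-retry rule; it is illustrative, not minikube's actual implementation.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strconv"
	"time"
)

// getWithRetry issues the same GET up to maxAttempts times, honoring a
// Retry-After header (in seconds) between attempts and otherwise waiting
// one second, matching the once-per-second cadence seen in the log.
func getWithRetry(url string, maxAttempts int) ([]byte, error) {
	delay := time.Second // fallback when no Retry-After header is present
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		resp, err := http.Get(url)
		if err != nil {
			return nil, fmt.Errorf("attempt %d: %w", attempt, err)
		}
		body, readErr := io.ReadAll(resp.Body)
		resp.Body.Close()
		if readErr == nil && resp.StatusCode < 300 && len(body) > 0 {
			return body, nil // a real, non-empty answer: stop retrying
		}
		// Honor an explicit "Retry-After: <seconds>" header if one is set.
		if s := resp.Header.Get("Retry-After"); s != "" {
			if secs, perr := strconv.Atoi(s); perr == nil {
				delay = time.Duration(secs) * time.Second
			}
		}
		fmt.Printf("got a Retry-After response, attempt=%d, sleeping %s\n", attempt, delay)
		time.Sleep(delay)
	}
	return nil, fmt.Errorf("no usable response after %d attempts", maxAttempts)
}

func main() {
	// The URL mirrors the one in the log; the port is ephemeral per run.
	if _, err := getWithRetry("https://127.0.0.1:63845/api/v1/nodes/functional-482100", 10); err != nil {
		fmt.Println("giving up for this cycle (will retry):", err)
	}
}
```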
	[log condensed: the same GET sent 11 times between 08:59:17.717197 and 08:59:27.755077, one per second, all replies empty]
	W1213 08:59:27.755077    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[log condensed: the same GET sent 4 times (initial attempt plus retries 1–3) between 08:59:27.755077 and 08:59:30.768756, all replies empty]
	I1213 08:59:30.770278    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:59:31.058703    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:59:31.062891    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:59:31.062891    1308 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
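
The two "apply failed, will retry" blocks here come from minikube's addon enabler shelling out to the bundled kubectl while the apiserver on localhost:8441 is still refusing connections, so manifest validation cannot download the openapi schema. A rough standalone equivalent is sketched below; the retry count and delay are assumptions for illustration, and minikube's real policy in addons.go differs.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry runs `kubectl apply --force -f manifest` with the given
// kubeconfig and retries on failure, roughly what the "apply failed,
// will retry" lines above report.
func applyWithRetry(kubectl, kubeconfig, manifest string, attempts int, wait time.Duration) error {
	var lastErr error
	for i := 1; i <= attempts; i++ {
		cmd := exec.Command(kubectl, "apply", "--force", "-f", manifest)
		cmd.Env = append(cmd.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("attempt %d: %v: %s", i, err, out)
		fmt.Println("apply failed, will retry:", lastErr)
		time.Sleep(wait)
	}
	return lastErr
}

func main() {
	// Paths match the log; 3 attempts / 2 s backoff are illustrative values.
	err := applyWithRetry(
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storageclass.yaml",
		3, 2*time.Second,
	)
	if err != nil {
		fmt.Println("enabling 'default-storageclass' returned an error:", err)
	}
}
```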
	[log condensed: retries 4–5 of the same GET between 08:59:31.769017 and 08:59:32.775498, both replies empty]
	I1213 08:59:33.593527    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:59:33.670412    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:59:33.677065    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:59:33.677065    1308 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 08:59:33.680151    1308 out.go:179] * Enabled addons: 
	I1213 08:59:33.683381    1308 addons.go:530] duration metric: took 1m56.7187029s for enable addons: enabled=[]
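
Each node_ready.go:55 warning in this log is one failed round of the same readiness probe: fetch the node object and inspect its Ready condition. The sketch below is a compact client-go version of that check; the kubeconfig path, 10-second poll interval, and loop shape are assumptions for illustration rather than minikube's exact code.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches one node and reports whether its Ready condition is
// True, which is the status minikube keeps polling for above.
func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. the EOF errors seen in the log
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Illustrative kubeconfig path; the port in the log is ephemeral.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ready, err := nodeReady(context.Background(), cs, "functional-482100")
		if err != nil {
			fmt.Println(`error getting node "functional-482100" condition "Ready" status (will retry):`, err)
		} else if ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(10 * time.Second) // warnings above arrive ~10 s apart
	}
}
```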
	[log condensed: retries 6–10 of the same GET between 08:59:33.775980 and 08:59:37.792181, all replies empty]
	W1213 08:59:37.792181    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[log condensed: the same GET sent 11 times between 08:59:37.792181 and 08:59:47.837174, one per second, all replies empty]
	W1213 08:59:47.837252    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[log condensed: the same GET sent 11 times between 08:59:47.837252 and 08:59:57.878442, one per second, all replies empty]
	W1213 08:59:57.878519    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[log condensed: the same GET sent 11 times between 08:59:57.878660 and 09:00:07.920022, one per second, all replies empty]
	W1213 09:00:07.920154    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[log condensed: the same GET sent 11 times between 09:00:07.920228 and 09:00:17.960472, one per second, all replies empty]
	W1213 09:00:17.960625    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[log condensed: the same GET sent 11 times between 09:00:17.960654 and 09:00:27.998145, one per second, all replies empty]
	W1213 09:00:27.998665    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[log condensed: the same GET sent 11 times between 09:00:27.998851 and 09:00:38.038073, one per second, all replies empty]
	W1213 09:00:38.038187    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[log condensed: the same GET sent 7 times (initial attempt plus retries 1–6) between 09:00:38.038337 and 09:00:44.062923, one per second, all replies empty]
	I1213 09:00:45.063384    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:45.063384    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:45.066631    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:00:46.067306    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:46.067306    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:46.070443    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:00:47.070777    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:47.070777    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:47.073795    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:00:48.074558    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:48.074558    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:48.077853    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1213 09:00:48.077917    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[... the same request/Retry-After cycle (attempt=1..10, delay="1s", status="" on every response) repeated for each subsequent poll of https://127.0.0.1:63845/api/v1/nodes/functional-482100; per-request I-level lines from 09:00:48 through 09:02:08 elided. Each poll ended with:]
	W1213 09:00:58.122648    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	W1213 09:01:08.163399    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	W1213 09:01:18.204825    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	W1213 09:01:28.242163    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	W1213 09:01:38.284566    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	W1213 09:01:48.331951    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	W1213 09:01:58.376426    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	W1213 09:02:08.419342    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 09:02:08.419400    1308 type.go:168] "Request Body" body=""
	I1213 09:02:08.419400    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:08.421925    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1213 09:02:09.422119    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:09.422119    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:09.424626    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:02:10.426518    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:10.426518    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:10.430645    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 09:02:11.431039    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:11.431039    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:11.434110    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:02:12.434291    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:12.434618    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:12.437021    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:02:13.437858    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:13.437858    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:13.440822    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:02:14.441345    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:14.441345    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:14.444544    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:02:15.444691    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:15.444691    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:15.447957    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:02:16.448990    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:16.448990    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:16.452282    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:02:17.452755    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:17.452755    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:17.456404    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:02:18.456603    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:18.456603    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:18.459851    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1213 09:02:18.459890    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[... the same cycle repeats verbatim for the next ~78 seconds: client-go retries GET https://127.0.0.1:63845/api/v1/nodes/functional-482100 once per second (with_retry.go attempts 1-10), and after each batch node_ready.go:55 logs the identical warning — error getting node "functional-482100" condition "Ready" status (will retry): EOF — at 09:02:28, 09:02:38, 09:02:48, 09:02:58, 09:03:08, 09:03:18, and 09:03:28; the final batch runs only through attempt 8 at 09:03:36 before the deadline below ...]
	W1213 09:03:37.302575    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1213 09:03:37.302575    1308 node_ready.go:38] duration metric: took 6m0.0011646s for node "functional-482100" to be "Ready" ...
	I1213 09:03:37.305847    1308 out.go:203] 
	W1213 09:03:37.307851    1308 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 09:03:37.307851    1308 out.go:285] * 
	W1213 09:03:37.311623    1308 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 09:03:37.314310    1308 out.go:203] 
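
	The six minutes of requests above are minikube's wait for the node's Ready condition: with_retry.go re-issues each GET on the apiserver's Retry-After hint until the client rate limiter hits the 6m0s context deadline. A minimal sketch of the same pattern with client-go — illustrative only, not minikube's actual node_ready.go; waitNodeReady and the kubeconfig wiring are made up for the example — looks like this:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node object once per second (the 1s cadence seen
	// in the log) until its Ready condition is True or the 6m timeout lapses.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					// Transient errors (like the EOFs above) are logged and retried,
					// not treated as fatal; only the deadline ends the wait.
					fmt.Printf("will retry: %v\n", err)
					return false, nil
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		if err := waitNodeReady(context.Background(), kubernetes.NewForConfigOrDie(cfg), "functional-482100"); err != nil {
			fmt.Println("node never became Ready:", err)
		}
	}

	Run against this cluster's kubeconfig, a loop like this would behave exactly as the log shows: every GET fails with EOF, each failure is swallowed and retried, and after six minutes the caller surfaces the context-deadline error seen in GUEST_START above.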
	
	
	==> Docker <==
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.525747623Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.525754023Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.525775925Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.525849730Z" level=info msg="Initializing buildkit"
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.646190196Z" level=info msg="Completed buildkit initialization"
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.655073529Z" level=info msg="Daemon has completed initialization"
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.655186237Z" level=info msg="API listen on /run/docker.sock"
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.655229540Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.655448956Z" level=info msg="API listen on [::]:2376"
	Dec 13 08:57:33 functional-482100 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 13 08:57:33 functional-482100 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 08:57:33 functional-482100 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 13 08:57:33 functional-482100 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 13 08:57:34 functional-482100 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Start docker client with request timeout 0s"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Loaded network plugin cni"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 13 08:57:34 functional-482100 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:03:40.068702   17383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:03:40.070055   17383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:03:40.071828   17383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:03:40.073748   17383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:03:40.075096   17383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000739] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000891] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001020] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001158] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001174] FS:  0000000000000000 GS:  0000000000000000
	[Dec13 08:57] CPU: 3 PID: 54870 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000964] RIP: 0033:0x7f5dc4ba4b20
	[  +0.000410] Code: Unable to access opcode bytes at RIP 0x7f5dc4ba4af6.
	[  +0.000689] RSP: 002b:00007ffdbe9599f0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000820] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000875] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001112] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001539] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001199] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001222] FS:  0000000000000000 GS:  0000000000000000
	[  +0.961990] CPU: 3 PID: 54996 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000796] RIP: 0033:0x7f46e6061b20
	[  +0.000388] Code: Unable to access opcode bytes at RIP 0x7f46e6061af6.
	[  +0.000654] RSP: 002b:00007ffd6f1408e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000776] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000787] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001010] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001229] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001341] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001210] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 09:03:40 up 39 min,  0 user,  load average: 0.63, 0.41, 0.61
	Linux functional-482100 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 09:03:36 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:03:37 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 818.
	Dec 13 09:03:37 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:03:37 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:03:37 functional-482100 kubelet[17212]: E1213 09:03:37.549911   17212 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:03:37 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:03:37 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:03:38 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 819.
	Dec 13 09:03:38 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:03:38 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:03:38 functional-482100 kubelet[17226]: E1213 09:03:38.305578   17226 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:03:38 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:03:38 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:03:38 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 820.
	Dec 13 09:03:38 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:03:38 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:03:39 functional-482100 kubelet[17254]: E1213 09:03:39.046812   17254 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:03:39 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:03:39 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:03:39 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 821.
	Dec 13 09:03:39 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:03:39 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:03:39 functional-482100 kubelet[17326]: E1213 09:03:39.798933   17326 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:03:39 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:03:39 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-482100 -n functional-482100
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-482100 -n functional-482100: exit status 2 (586.2836ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-482100" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (374.23s)
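
Note: the kubelet journal above shows a tight restart loop (restart counters 818 through 821) with the same validation failure each time: the v1.35.0-beta.0 kubelet refuses to start on a cgroup v1 host, so the node never reports Ready and the 6m0s wait in the start path times out. A minimal way to confirm which cgroup mode the node container is running under (a diagnostic sketch run outside the test harness; the profile name is taken from the logs above):

	# prints "cgroup2fs" on a cgroup v2 host, "tmpfs" on cgroup v1
	minikube ssh -p functional-482100 -- stat -fc %T /sys/fs/cgroup/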

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (53.59s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-482100 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-482100 get po -A: exit status 1 (50.3713546s)

                                                
                                                
** stderr ** 
	E1213 09:03:51.845331    6196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:04:01.884950    6196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:04:11.927020    6196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:04:21.968935    6196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:04:32.011110    6196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-482100 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"E1213 09:03:51.845331    6196 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://127.0.0.1:63845/api?timeout=32s\\\": EOF\"\nE1213 09:04:01.884950    6196 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://127.0.0.1:63845/api?timeout=32s\\\": EOF\"\nE1213 09:04:11.927020    6196 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://127.0.0.1:63845/api?timeout=32s\\\": EOF\"\nE1213 09:04:21.968935    6196 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://127.0.0.1:63845/api?timeout=32s\\\": EOF\"\nE1213 09:04:32.011110    6196 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://127.0.0.1:63845/api?timeout=32s\\\": EOF\"\nUnable to connect to the server: EOF\n"*: args "kubectl --context functional-482100 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-482100 get po -A"
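
Note: each retry above fails with EOF against https://127.0.0.1:63845, the host-published apiserver endpoint (container port 8441/tcp, per the docker inspect output below). Docker's port proxy accepts the TCP connection but the apiserver behind it is down, consistent with the kubelet crash loop from the SoftStart failure. A direct probe of the endpoint (a diagnostic sketch, assuming the same port mapping is still in place):

	# a healthy apiserver answers "ok"; here the connection drops with EOF
	curl -k https://127.0.0.1:63845/healthz
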
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-482100
helpers_test.go:244: (dbg) docker inspect functional-482100:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa",
	        "Created": "2025-12-13T08:49:07.27080474Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43282,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T08:49:07.556748749Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/hostname",
	        "HostsPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/hosts",
	        "LogPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa-json.log",
	        "Name": "/functional-482100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-482100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-482100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91-init/diff:/var/lib/docker/overlay2/429aa299c6fcdb1695d08ec7c893c57c033afffcd3ec41fc904bf3236db5abde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-482100",
	                "Source": "/var/lib/docker/volumes/functional-482100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-482100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-482100",
	                "name.minikube.sigs.k8s.io": "functional-482100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0846ee7b9ca8cb54809a7d685cd1bf9a4ebcad80c4fa7d3ad64c01e27d0c8bc4",
	            "SandboxKey": "/var/run/docker/netns/0846ee7b9ca8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63841"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63842"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63844"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63845"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-482100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "88ce21d6cbdebdf878313475255fe0fbc85957ab9cf1fa33630b61bbbfd2061c",
	                    "EndpointID": "88d9584a7fae8c35f7938fb422a7bed2f8ec5a3db15bd02c0d2459ed9f8f0e4d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-482100",
	                        "688ac19b4403"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
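
Note: the inspect output above ties the failing URL to the container: 8441/tcp is published on 127.0.0.1:63845. The same Go-template lookup minikube itself uses for the SSH port (see the "Last Start" log below) recovers the apiserver host port; a sketch against this profile:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-482100
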
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-482100 -n functional-482100
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-482100 -n functional-482100: exit status 2 (562.28ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-482100 logs -n 25: (1.202734s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                          ARGS                                                           │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-213400 image ls                                                                                              │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:42 UTC │ 13 Dec 25 08:42 UTC │
	│ image          │ functional-213400 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr     │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:42 UTC │ 13 Dec 25 08:42 UTC │
	│ image          │ functional-213400 image ls                                                                                              │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:42 UTC │ 13 Dec 25 08:42 UTC │
	│ image          │ functional-213400 image save --daemon kicbase/echo-server:functional-213400 --alsologtostderr                           │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:42 UTC │ 13 Dec 25 08:42 UTC │
	│ service        │ functional-213400 service hello-node --url --format={{.IP}}                                                             │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │                     │
	│ service        │ functional-213400 service hello-node --url                                                                              │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │                     │
	│ addons         │ functional-213400 addons list                                                                                           │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ addons         │ functional-213400 addons list -o json                                                                                   │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ license        │                                                                                                                         │ minikube          │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ start          │ -p functional-213400 --dry-run --memory 250MB --alsologtostderr --driver=docker                                         │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-213400 --alsologtostderr -v=1                                                          │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │                     │
	│ start          │ -p functional-213400 --dry-run --memory 250MB --alsologtostderr --driver=docker                                         │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │                     │
	│ start          │ -p functional-213400 --dry-run --alsologtostderr -v=1 --driver=docker                                                   │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │                     │
	│ update-context │ functional-213400 update-context --alsologtostderr -v=2                                                                 │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ update-context │ functional-213400 update-context --alsologtostderr -v=2                                                                 │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ image          │ functional-213400 image ls --format short --alsologtostderr                                                             │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ image          │ functional-213400 image ls --format yaml --alsologtostderr                                                              │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ ssh            │ functional-213400 ssh pgrep buildkitd                                                                                   │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │                     │
	│ image          │ functional-213400 image ls --format json --alsologtostderr                                                              │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ image          │ functional-213400 image build -t localhost/my-image:functional-213400 testdata\build --alsologtostderr                  │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ image          │ functional-213400 image ls --format table --alsologtostderr                                                             │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ image          │ functional-213400 image ls                                                                                              │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ delete         │ -p functional-213400                                                                                                    │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:48 UTC │ 13 Dec 25 08:48 UTC │
	│ start          │ -p functional-482100 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:48 UTC │                     │
	│ start          │ -p functional-482100 --alsologtostderr -v=8                                                                             │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:57 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:57:27
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 08:57:27.379293    1308 out.go:360] Setting OutFile to fd 1960 ...
	I1213 08:57:27.421775    1308 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:57:27.421775    1308 out.go:374] Setting ErrFile to fd 2020...
	I1213 08:57:27.421858    1308 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:57:27.434678    1308 out.go:368] Setting JSON to false
	I1213 08:57:27.436793    1308 start.go:133] hostinfo: {"hostname":"minikube4","uptime":2054,"bootTime":1765614192,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 08:57:27.436793    1308 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 08:57:27.440227    1308 out.go:179] * [functional-482100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 08:57:27.444177    1308 notify.go:221] Checking for updates...
	I1213 08:57:27.444177    1308 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:57:27.446958    1308 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:57:27.448893    1308 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 08:57:27.451179    1308 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:57:27.453000    1308 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:57:27.455340    1308 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 08:57:27.456010    1308 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:57:27.677552    1308 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 08:57:27.681550    1308 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:57:27.918123    1308 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-13 08:57:27.897746454 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 08:57:27.922386    1308 out.go:179] * Using the docker driver based on existing profile
	I1213 08:57:27.925483    1308 start.go:309] selected driver: docker
	I1213 08:57:27.925483    1308 start.go:927] validating driver "docker" against &{Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:57:27.925483    1308 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:57:27.931484    1308 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:57:28.158174    1308 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-13 08:57:28.141185883 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 08:57:28.238865    1308 cni.go:84] Creating CNI manager for ""
	I1213 08:57:28.238865    1308 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 08:57:28.239498    1308 start.go:353] cluster config:
	{Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:57:28.243527    1308 out.go:179] * Starting "functional-482100" primary control-plane node in "functional-482100" cluster
	I1213 08:57:28.245818    1308 cache.go:134] Beginning downloading kic base image for docker with docker
	I1213 08:57:28.247303    1308 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 08:57:28.251374    1308 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 08:57:28.251465    1308 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 08:57:28.251634    1308 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1213 08:57:28.251673    1308 cache.go:65] Caching tarball of preloaded images
	I1213 08:57:28.251673    1308 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 08:57:28.251673    1308 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1213 08:57:28.251673    1308 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\config.json ...
	I1213 08:57:28.331506    1308 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 08:57:28.331506    1308 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 08:57:28.331506    1308 cache.go:243] Successfully downloaded all kic artifacts
	I1213 08:57:28.331506    1308 start.go:360] acquireMachinesLock for functional-482100: {Name:mkdbad0c5d0c221588a4a9490c5c0730668b0a50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 08:57:28.331506    1308 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-482100"
	I1213 08:57:28.331506    1308 start.go:96] Skipping create...Using existing machine configuration
	I1213 08:57:28.331506    1308 fix.go:54] fixHost starting: 
	I1213 08:57:28.338850    1308 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
	I1213 08:57:28.394405    1308 fix.go:112] recreateIfNeeded on functional-482100: state=Running err=<nil>
	W1213 08:57:28.394453    1308 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 08:57:28.397828    1308 out.go:252] * Updating the running docker "functional-482100" container ...
	I1213 08:57:28.397828    1308 machine.go:94] provisionDockerMachine start ...
	I1213 08:57:28.401414    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:28.456355    1308 main.go:143] libmachine: Using SSH client type: native
	I1213 08:57:28.457085    1308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:57:28.457134    1308 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 08:57:28.656820    1308 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-482100
	
	I1213 08:57:28.656820    1308 ubuntu.go:182] provisioning hostname "functional-482100"
	I1213 08:57:28.660505    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:28.713653    1308 main.go:143] libmachine: Using SSH client type: native
	I1213 08:57:28.714127    1308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:57:28.714127    1308 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-482100 && echo "functional-482100" | sudo tee /etc/hostname
	I1213 08:57:28.912851    1308 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-482100
	
	I1213 08:57:28.916558    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:28.972916    1308 main.go:143] libmachine: Using SSH client type: native
	I1213 08:57:28.973035    1308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:57:28.973035    1308 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-482100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-482100/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-482100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 08:57:29.158720    1308 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 08:57:29.158720    1308 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1213 08:57:29.158720    1308 ubuntu.go:190] setting up certificates
	I1213 08:57:29.158720    1308 provision.go:84] configureAuth start
	I1213 08:57:29.162705    1308 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-482100
	I1213 08:57:29.217525    1308 provision.go:143] copyHostCerts
	I1213 08:57:29.217525    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1213 08:57:29.217525    1308 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1213 08:57:29.217525    1308 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1213 08:57:29.218193    1308 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1213 08:57:29.218931    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1213 08:57:29.219078    1308 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1213 08:57:29.219114    1308 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1213 08:57:29.219299    1308 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1213 08:57:29.220064    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1213 08:57:29.220064    1308 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1213 08:57:29.220064    1308 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1213 08:57:29.220064    1308 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1213 08:57:29.220972    1308 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-482100 san=[127.0.0.1 192.168.49.2 functional-482100 localhost minikube]
	I1213 08:57:29.312824    1308 provision.go:177] copyRemoteCerts
	I1213 08:57:29.317163    1308 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 08:57:29.320164    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:29.370164    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:29.504512    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1213 08:57:29.504655    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 08:57:29.542721    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1213 08:57:29.542721    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 08:57:29.574672    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1213 08:57:29.574672    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 08:57:29.604045    1308 provision.go:87] duration metric: took 445.3221ms to configureAuth
	I1213 08:57:29.604045    1308 ubuntu.go:206] setting minikube options for container-runtime
	I1213 08:57:29.605053    1308 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 08:57:29.610417    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:29.666069    1308 main.go:143] libmachine: Using SSH client type: native
	I1213 08:57:29.666532    1308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:57:29.666532    1308 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 08:57:29.836610    1308 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1213 08:57:29.836610    1308 ubuntu.go:71] root file system type: overlay
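The root filesystem type is probed because the kicbase container itself runs on overlayfs, and the Docker daemon provisioned inside it must pick a storage driver that works on top of that. The probe is plain df and can be reproduced verbatim:

	# prints the filesystem type backing /, e.g. "overlay" inside a kic container
	df --output=fstype / | tail -n 1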
	I1213 08:57:29.836610    1308 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 08:57:29.840760    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:29.894590    1308 main.go:143] libmachine: Using SSH client type: native
	I1213 08:57:29.895592    1308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:57:29.895592    1308 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 08:57:30.101134    1308 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 08:57:30.105760    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:30.161736    1308 main.go:143] libmachine: Using SSH client type: native
	I1213 08:57:30.162318    1308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:57:30.162318    1308 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 08:57:30.345094    1308 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 08:57:30.345094    1308 machine.go:97] duration metric: took 1.947253s to provisionDockerMachine
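The one-liner above is an update-only-if-changed guard: diff -u exits 0 when the rendered unit matches the installed one, so the replace-and-restart branch runs only when the unit differs (or does not exist yet). The same idiom, spelled out with the paths from the log:

	new=/lib/systemd/system/docker.service.new
	cur=/lib/systemd/system/docker.service
	# diff exits non-zero on any difference (or a missing file), triggering the swap
	sudo diff -u "$cur" "$new" || {
	    sudo mv "$new" "$cur"
	    sudo systemctl daemon-reload && sudo systemctl restart docker
	}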
	I1213 08:57:30.345094    1308 start.go:293] postStartSetup for "functional-482100" (driver="docker")
	I1213 08:57:30.345094    1308 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 08:57:30.349348    1308 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 08:57:30.352292    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:30.407399    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:30.537367    1308 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 08:57:30.545885    1308 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 08:57:30.545957    1308 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 08:57:30.545957    1308 command_runner.go:130] > VERSION_ID="12"
	I1213 08:57:30.545957    1308 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 08:57:30.545957    1308 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 08:57:30.545957    1308 command_runner.go:130] > ID=debian
	I1213 08:57:30.545957    1308 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 08:57:30.545957    1308 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 08:57:30.545957    1308 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 08:57:30.546095    1308 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 08:57:30.546117    1308 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 08:57:30.546141    1308 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1213 08:57:30.546161    1308 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1213 08:57:30.546880    1308 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> 29682.pem in /etc/ssl/certs
	I1213 08:57:30.546880    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> /etc/ssl/certs/29682.pem
	I1213 08:57:30.547539    1308 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\2968\hosts -> hosts in /etc/test/nested/copy/2968
	I1213 08:57:30.547539    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\2968\hosts -> /etc/test/nested/copy/2968/hosts
	I1213 08:57:30.551732    1308 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2968
	I1213 08:57:30.565806    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /etc/ssl/certs/29682.pem (1708 bytes)
	I1213 08:57:30.596092    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\2968\hosts --> /etc/test/nested/copy/2968/hosts (40 bytes)
	I1213 08:57:30.624821    1308 start.go:296] duration metric: took 279.7253ms for postStartSetup
	I1213 08:57:30.629883    1308 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 08:57:30.633087    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:30.686590    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:30.807695    1308 command_runner.go:130] > 1%
	I1213 08:57:30.812335    1308 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 08:57:30.820851    1308 command_runner.go:130] > 950G
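Both disk probes read the second (data) row of df's report: with -h the fifth column is Use%, and with -BG the fourth column is the space still available in GiB blocks. Run by hand:

	df -h /var | awk 'NR==2{print $5}'    # usage percentage, e.g. 1%
	df -BG /var | awk 'NR==2{print $4}'   # available space, e.g. 950G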
	I1213 08:57:30.820851    1308 fix.go:56] duration metric: took 2.4893282s for fixHost
	I1213 08:57:30.820851    1308 start.go:83] releasing machines lock for "functional-482100", held for 2.4893282s
	I1213 08:57:30.824237    1308 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-482100
	I1213 08:57:30.876765    1308 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1213 08:57:30.881324    1308 ssh_runner.go:195] Run: cat /version.json
	I1213 08:57:30.881371    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:30.884518    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:30.935914    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:30.935914    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:31.066730    1308 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1213 08:57:31.066730    1308 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1213 08:57:31.066730    1308 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 08:57:31.071708    1308 ssh_runner.go:195] Run: systemctl --version
	I1213 08:57:31.084553    1308 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 08:57:31.084640    1308 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 08:57:31.090087    1308 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 08:57:31.099561    1308 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 08:57:31.100565    1308 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 08:57:31.105214    1308 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 08:57:31.124077    1308 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 08:57:31.124077    1308 start.go:496] detecting cgroup driver to use...
	I1213 08:57:31.124077    1308 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 08:57:31.124648    1308 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 08:57:31.147852    1308 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1213 08:57:31.152021    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 08:57:31.174172    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1213 08:57:31.176576    1308 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1213 08:57:31.176576    1308 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1213 08:57:31.189695    1308 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 08:57:31.194128    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 08:57:31.213650    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:57:31.232544    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 08:57:31.252203    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:57:31.274175    1308 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 08:57:31.296706    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 08:57:31.315777    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 08:57:31.334664    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 08:57:31.355488    1308 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 08:57:31.369376    1308 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 08:57:31.373398    1308 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
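Pod networking requires bridged traffic to pass through iptables and IPv4 forwarding to be on; the two commands above check the former and force the latter. A quick verification (the bridge sysctl only exists once the br_netfilter module is loaded):

	# both values should report 1 on a working node
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward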
	I1213 08:57:31.391830    1308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:57:31.608372    1308 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 08:57:31.906123    1308 start.go:496] detecting cgroup driver to use...
	I1213 08:57:31.906123    1308 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 08:57:31.911089    1308 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 08:57:31.932611    1308 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1213 08:57:31.933145    1308 command_runner.go:130] > [Unit]
	I1213 08:57:31.933145    1308 command_runner.go:130] > Description=Docker Application Container Engine
	I1213 08:57:31.933145    1308 command_runner.go:130] > Documentation=https://docs.docker.com
	I1213 08:57:31.933145    1308 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1213 08:57:31.933145    1308 command_runner.go:130] > Wants=network-online.target containerd.service
	I1213 08:57:31.933145    1308 command_runner.go:130] > Requires=docker.socket
	I1213 08:57:31.933145    1308 command_runner.go:130] > StartLimitBurst=3
	I1213 08:57:31.933239    1308 command_runner.go:130] > StartLimitIntervalSec=60
	I1213 08:57:31.933239    1308 command_runner.go:130] > [Service]
	I1213 08:57:31.933239    1308 command_runner.go:130] > Type=notify
	I1213 08:57:31.933239    1308 command_runner.go:130] > Restart=always
	I1213 08:57:31.933239    1308 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1213 08:57:31.933239    1308 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1213 08:57:31.933303    1308 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1213 08:57:31.933336    1308 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1213 08:57:31.933336    1308 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1213 08:57:31.933336    1308 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1213 08:57:31.933336    1308 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1213 08:57:31.933336    1308 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1213 08:57:31.933336    1308 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1213 08:57:31.933415    1308 command_runner.go:130] > ExecStart=
	I1213 08:57:31.933415    1308 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1213 08:57:31.933415    1308 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1213 08:57:31.933415    1308 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1213 08:57:31.933498    1308 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1213 08:57:31.933498    1308 command_runner.go:130] > LimitNOFILE=infinity
	I1213 08:57:31.933498    1308 command_runner.go:130] > LimitNPROC=infinity
	I1213 08:57:31.933498    1308 command_runner.go:130] > LimitCORE=infinity
	I1213 08:57:31.933498    1308 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1213 08:57:31.933498    1308 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1213 08:57:31.933498    1308 command_runner.go:130] > TasksMax=infinity
	I1213 08:57:31.933498    1308 command_runner.go:130] > TimeoutStartSec=0
	I1213 08:57:31.933572    1308 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1213 08:57:31.933591    1308 command_runner.go:130] > Delegate=yes
	I1213 08:57:31.933591    1308 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1213 08:57:31.933591    1308 command_runner.go:130] > KillMode=process
	I1213 08:57:31.933591    1308 command_runner.go:130] > OOMScoreAdjust=-500
	I1213 08:57:31.933591    1308 command_runner.go:130] > [Install]
	I1213 08:57:31.933591    1308 command_runner.go:130] > WantedBy=multi-user.target
	I1213 08:57:31.938295    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 08:57:31.960377    1308 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 08:57:32.049121    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 08:57:32.071680    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 08:57:32.093496    1308 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 08:57:32.115103    1308 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
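crictl reads its runtime endpoint from /etc/crictl.yaml; it was pointed at containerd earlier while that runtime was being reconfigured, and is switched here to cri-dockerd, the CRI shim in front of the Docker engine. To confirm which runtime crictl ends up talking to:

	cat /etc/crictl.yaml
	sudo crictl version    # RuntimeName should report docker, as it does below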
	I1213 08:57:32.119951    1308 ssh_runner.go:195] Run: which cri-dockerd
	I1213 08:57:32.126371    1308 command_runner.go:130] > /usr/bin/cri-dockerd
	I1213 08:57:32.130902    1308 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 08:57:32.144169    1308 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1213 08:57:32.170348    1308 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 08:57:32.320163    1308 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 08:57:32.454851    1308 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 08:57:32.454851    1308 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 08:57:32.483674    1308 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1213 08:57:32.505831    1308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:57:32.661991    1308 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 08:57:33.665330    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 08:57:33.689450    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 08:57:33.711087    1308 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1213 08:57:33.739462    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 08:57:33.760714    1308 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 08:57:33.900242    1308 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 08:57:34.052335    1308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:57:34.188283    1308 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 08:57:34.213402    1308 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1213 08:57:34.237672    1308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:57:34.381154    1308 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 08:57:34.499581    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 08:57:34.518141    1308 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 08:57:34.522686    1308 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 08:57:34.529494    1308 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1213 08:57:34.529494    1308 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 08:57:34.529494    1308 command_runner.go:130] > Device: 0,112	Inode: 1755        Links: 1
	I1213 08:57:34.529494    1308 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1213 08:57:34.529494    1308 command_runner.go:130] > Access: 2025-12-13 08:57:34.386291479 +0000
	I1213 08:57:34.529494    1308 command_runner.go:130] > Modify: 2025-12-13 08:57:34.386291479 +0000
	I1213 08:57:34.529494    1308 command_runner.go:130] > Change: 2025-12-13 08:57:34.386291479 +0000
	I1213 08:57:34.529494    1308 command_runner.go:130] >  Birth: -
	I1213 08:57:34.529494    1308 start.go:564] Will wait 60s for crictl version
	I1213 08:57:34.534224    1308 ssh_runner.go:195] Run: which crictl
	I1213 08:57:34.541202    1308 command_runner.go:130] > /usr/local/bin/crictl
	I1213 08:57:34.545269    1308 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 08:57:34.587655    1308 command_runner.go:130] > Version:  0.1.0
	I1213 08:57:34.587655    1308 command_runner.go:130] > RuntimeName:  docker
	I1213 08:57:34.587655    1308 command_runner.go:130] > RuntimeVersion:  29.1.2
	I1213 08:57:34.587655    1308 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 08:57:34.587655    1308 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1213 08:57:34.590292    1308 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 08:57:34.627699    1308 command_runner.go:130] > 29.1.2
	I1213 08:57:34.631112    1308 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 08:57:34.669555    1308 command_runner.go:130] > 29.1.2
	I1213 08:57:34.677969    1308 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1213 08:57:34.681392    1308 cli_runner.go:164] Run: docker exec -t functional-482100 dig +short host.docker.internal
	I1213 08:57:34.898094    1308 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1213 08:57:34.902419    1308 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1213 08:57:34.910595    1308 command_runner.go:130] > 192.168.65.254	host.minikube.internal
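host.minikube.internal is derived by digging host.docker.internal from inside the container and pinning the answer in the guest's /etc/hosts; the grep above merely confirms the entry already exists. The discovery step can be replayed against the profile's container:

	docker exec -t functional-482100 dig +short host.docker.internal
	# e.g. 192.168.65.254 on Docker Desktop, as seen in this run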
	I1213 08:57:34.914565    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:34.972832    1308 kubeadm.go:884] updating cluster {Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 08:57:34.972832    1308 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 08:57:34.977045    1308 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1213 08:57:35.008739    1308 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 08:57:35.008739    1308 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 08:57:35.010249    1308 docker.go:621] Images already preloaded, skipping extraction
	I1213 08:57:35.013678    1308 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 08:57:35.043903    1308 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 08:57:35.043957    1308 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 08:57:35.043957    1308 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 08:57:35.043957    1308 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 08:57:35.043957    1308 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1213 08:57:35.043957    1308 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1213 08:57:35.043957    1308 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1213 08:57:35.044022    1308 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 08:57:35.044104    1308 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 08:57:35.044104    1308 cache_images.go:86] Images are preloaded, skipping loading
	I1213 08:57:35.044160    1308 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1213 08:57:35.044312    1308 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-482100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 08:57:35.047625    1308 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1213 08:57:35.491294    1308 command_runner.go:130] > cgroupfs
	I1213 08:57:35.491294    1308 cni.go:84] Creating CNI manager for ""
	I1213 08:57:35.491294    1308 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 08:57:35.491294    1308 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 08:57:35.491294    1308 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-482100 NodeName:functional-482100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 08:57:35.491294    1308 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-482100"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
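The generated manifest stitches four documents together: kubeadm's InitConfiguration and ClusterConfiguration plus a KubeletConfiguration and a KubeProxyConfiguration. It is written to /var/tmp/minikube/kubeadm.yaml.new below; as a sketch (not a step shown in this log, and assuming the bundled kubeadm is v1.26 or newer, where the validate subcommand exists), it could be sanity-checked in place with:

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new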
	
	I1213 08:57:35.495479    1308 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 08:57:35.511680    1308 command_runner.go:130] > kubeadm
	I1213 08:57:35.511680    1308 command_runner.go:130] > kubectl
	I1213 08:57:35.511680    1308 command_runner.go:130] > kubelet
	I1213 08:57:35.511680    1308 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 08:57:35.515943    1308 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 08:57:35.527808    1308 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1213 08:57:35.545969    1308 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 08:57:35.565749    1308 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1213 08:57:35.590269    1308 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 08:57:35.598806    1308 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 08:57:35.603098    1308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:57:35.752426    1308 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 08:57:35.771354    1308 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100 for IP: 192.168.49.2
	I1213 08:57:35.771354    1308 certs.go:195] generating shared ca certs ...
	I1213 08:57:35.771354    1308 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:57:35.771354    1308 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1213 08:57:35.772397    1308 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1213 08:57:35.772549    1308 certs.go:257] generating profile certs ...
	I1213 08:57:35.772794    1308 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\client.key
	I1213 08:57:35.772794    1308 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.key.13621831
	I1213 08:57:35.773396    1308 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.key
	I1213 08:57:35.773447    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 08:57:35.773539    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1213 08:57:35.773616    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 08:57:35.773761    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 08:57:35.773831    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 08:57:35.773939    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 08:57:35.773999    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 08:57:35.774105    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 08:57:35.774559    1308 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem (1338 bytes)
	W1213 08:57:35.774827    1308 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968_empty.pem, impossibly tiny 0 bytes
	I1213 08:57:35.774870    1308 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1213 08:57:35.775069    1308 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1213 08:57:35.775069    1308 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1213 08:57:35.775069    1308 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1213 08:57:35.775696    1308 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem (1708 bytes)
	I1213 08:57:35.775842    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:57:35.775842    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem -> /usr/share/ca-certificates/2968.pem
	I1213 08:57:35.775842    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> /usr/share/ca-certificates/29682.pem
	I1213 08:57:35.775842    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 08:57:35.807179    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 08:57:35.833688    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 08:57:35.863566    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 08:57:35.894920    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 08:57:35.921314    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 08:57:35.946004    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 08:57:35.973030    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 08:57:36.001405    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 08:57:36.027495    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem --> /usr/share/ca-certificates/2968.pem (1338 bytes)
	I1213 08:57:36.053673    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /usr/share/ca-certificates/29682.pem (1708 bytes)
	I1213 08:57:36.083163    1308 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 08:57:36.106205    1308 ssh_runner.go:195] Run: openssl version
	I1213 08:57:36.124518    1308 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 08:57:36.128653    1308 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2968.pem
	I1213 08:57:36.148109    1308 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2968.pem /etc/ssl/certs/2968.pem
	I1213 08:57:36.170644    1308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2968.pem
	I1213 08:57:36.179909    1308 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 08:48 /usr/share/ca-certificates/2968.pem
	I1213 08:57:36.179909    1308 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:48 /usr/share/ca-certificates/2968.pem
	I1213 08:57:36.184506    1308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2968.pem
	I1213 08:57:36.230303    1308 command_runner.go:130] > 51391683
	I1213 08:57:36.235418    1308 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 08:57:36.252420    1308 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29682.pem
	I1213 08:57:36.271009    1308 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29682.pem /etc/ssl/certs/29682.pem
	I1213 08:57:36.291738    1308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29682.pem
	I1213 08:57:36.301002    1308 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 08:48 /usr/share/ca-certificates/29682.pem
	I1213 08:57:36.301002    1308 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:48 /usr/share/ca-certificates/29682.pem
	I1213 08:57:36.306035    1308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29682.pem
	I1213 08:57:36.348842    1308 command_runner.go:130] > 3ec20f2e
	I1213 08:57:36.353574    1308 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 08:57:36.371994    1308 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:57:36.390417    1308 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 08:57:36.409132    1308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:57:36.417987    1308 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 08:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:57:36.418020    1308 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:57:36.422336    1308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:57:36.464222    1308 command_runner.go:130] > b5213941
	I1213 08:57:36.469763    1308 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
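The ln/openssl/test sequences above implement OpenSSL's subject-hash lookup convention: clients find a CA in /etc/ssl/certs through a symlink named <subject-hash>.0 that points at the PEM file. Installing one certificate by hand follows the same three steps:

	pem=/usr/share/ca-certificates/minikubeCA.pem    # path taken from the log
	h=$(openssl x509 -hash -noout -in "$pem")        # b5213941 in this run
	sudo ln -fs "$pem" "/etc/ssl/certs/$h.0"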
	I1213 08:57:36.486907    1308 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 08:57:36.493430    1308 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 08:57:36.493430    1308 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 08:57:36.493430    1308 command_runner.go:130] > Device: 8,48	Inode: 15294       Links: 1
	I1213 08:57:36.493430    1308 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 08:57:36.493430    1308 command_runner.go:130] > Access: 2025-12-13 08:53:22.558756963 +0000
	I1213 08:57:36.493430    1308 command_runner.go:130] > Modify: 2025-12-13 08:49:20.154446480 +0000
	I1213 08:57:36.493430    1308 command_runner.go:130] > Change: 2025-12-13 08:49:20.154446480 +0000
	I1213 08:57:36.493430    1308 command_runner.go:130] >  Birth: 2025-12-13 08:49:20.154446480 +0000
	I1213 08:57:36.498322    1308 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 08:57:36.542775    1308 command_runner.go:130] > Certificate will not expire
	I1213 08:57:36.547618    1308 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 08:57:36.590488    1308 command_runner.go:130] > Certificate will not expire
	I1213 08:57:36.594826    1308 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 08:57:36.640226    1308 command_runner.go:130] > Certificate will not expire
	I1213 08:57:36.644848    1308 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 08:57:36.698932    1308 command_runner.go:130] > Certificate will not expire
	I1213 08:57:36.703709    1308 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 08:57:36.746225    1308 command_runner.go:130] > Certificate will not expire
	I1213 08:57:36.751252    1308 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 08:57:36.796246    1308 command_runner.go:130] > Certificate will not expire
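Each -checkend 86400 call asks openssl whether the certificate expires within the next 86400 seconds (24 hours); exit status 0 means it stays valid, which the runner reports as "Certificate will not expire". As a standalone check:

	openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt \
	    -checkend 86400 && echo "valid for at least 24h"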
	I1213 08:57:36.796605    1308 kubeadm.go:401] StartCluster: {Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:57:36.800619    1308 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 08:57:36.835511    1308 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 08:57:36.848084    1308 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 08:57:36.848084    1308 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 08:57:36.848084    1308 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 08:57:36.848084    1308 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 08:57:36.848084    1308 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 08:57:36.853050    1308 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 08:57:36.866011    1308 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:57:36.869675    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:36.923417    1308 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-482100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:57:36.923684    1308 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "functional-482100" cluster setting kubeconfig missing "functional-482100" context setting]
	I1213 08:57:36.923684    1308 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
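The two kubeconfig.go lines above show minikube detecting that the kubeconfig has neither a cluster nor a context entry for "functional-482100", then acquiring a named write lock (Delay:500ms, Timeout:1m0s) before repairing the file. A rough sketch of that lock-then-write pattern, simplified to an O_EXCL lock file rather than minikube's actual lock.go machinery:

    // lockwrite.go - sketch of writing a config file under an exclusive lock.
    // The lock-file scheme here is illustrative, not minikube's implementation.
    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    func writeFileLocked(path string, data []byte, timeout time.Duration) error {
    	lock := path + ".lock"
    	deadline := time.Now().Add(timeout)
    	for {
    		// O_CREATE|O_EXCL fails if the lock file already exists.
    		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			break
    		}
    		if time.Now().After(deadline) {
    			return errors.New("timed out acquiring " + lock)
    		}
    		time.Sleep(500 * time.Millisecond) // matches the Delay:500ms in the log
    	}
    	defer os.Remove(lock)
    	return os.WriteFile(path, data, 0o600)
    }

    func main() {
    	err := writeFileLocked("kubeconfig", []byte("apiVersion: v1\nkind: Config\n"), time.Minute)
    	fmt.Println("write:", err)
    }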
	I1213 08:57:36.940090    1308 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:57:36.940688    1308 kapi.go:59] client config for functional-482100: &rest.Config{Host:"https://127.0.0.1:63845", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff744969080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 08:57:36.941864    1308 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 08:57:36.941864    1308 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 08:57:36.941864    1308 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 08:57:36.941864    1308 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 08:57:36.941864    1308 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 08:57:36.941864    1308 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
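The envvar.go lines record the default state of client-go's client-side feature gates in this build (only InOrderInformers defaults to on). These defaults can be overridden per process via environment variables; a toy sketch of that default-plus-override lookup, assuming client-go's KUBE_FEATURE_ env-var convention:

    // featuregates.go - toy sketch of default feature gates with env overrides,
    // mirroring the envvar.go lines above. Defaults are taken from the log; the
    // KUBE_FEATURE_ prefix is assumed from client-go's env-var convention.
    package main

    import (
    	"fmt"
    	"os"
    	"strconv"
    )

    var defaults = map[string]bool{
    	"InformerResourceVersion": false,
    	"InOrderInformers":        true,
    	"WatchListClient":         false,
    	"ClientsAllowCBOR":        false,
    	"ClientsPreferCBOR":       false,
    }

    func enabled(feature string) bool {
    	if v, ok := os.LookupEnv("KUBE_FEATURE_" + feature); ok {
    		if b, err := strconv.ParseBool(v); err == nil {
    			return b // env var wins over the compiled-in default
    		}
    	}
    	return defaults[feature]
    }

    func main() {
    	os.Setenv("KUBE_FEATURE_WatchListClient", "true") // example override
    	for f := range defaults {
    		fmt.Printf("%s enabled=%v\n", f, enabled(f))
    	}
    }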
	I1213 08:57:36.946352    1308 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 08:57:36.960987    1308 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1213 08:57:36.961998    1308 kubeadm.go:602] duration metric: took 113.913ms to restartPrimaryControlPlane
	I1213 08:57:36.961998    1308 kubeadm.go:403] duration metric: took 165.4668ms to StartCluster
	I1213 08:57:36.961998    1308 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:57:36.961998    1308 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:57:36.963076    1308 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:57:36.963883    1308 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 08:57:36.963883    1308 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 08:57:36.963883    1308 addons.go:70] Setting default-storageclass=true in profile "functional-482100"
	I1213 08:57:36.963883    1308 addons.go:70] Setting storage-provisioner=true in profile "functional-482100"
	I1213 08:57:36.963883    1308 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 08:57:36.963883    1308 addons.go:239] Setting addon storage-provisioner=true in "functional-482100"
	I1213 08:57:36.963883    1308 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-482100"
	I1213 08:57:36.964406    1308 host.go:66] Checking if "functional-482100" exists ...
	I1213 08:57:36.966968    1308 out.go:179] * Verifying Kubernetes components...
	I1213 08:57:36.972864    1308 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
	I1213 08:57:36.972864    1308 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
	I1213 08:57:36.974067    1308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:57:37.028122    1308 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 08:57:37.032121    1308 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:37.032121    1308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 08:57:37.035128    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:37.050133    1308 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:57:37.050133    1308 kapi.go:59] client config for functional-482100: &rest.Config{Host:"https://127.0.0.1:63845", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff744969080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 08:57:37.051141    1308 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 08:57:37.051141    1308 addons.go:239] Setting addon default-storageclass=true in "functional-482100"
	I1213 08:57:37.051141    1308 host.go:66] Checking if "functional-482100" exists ...
	I1213 08:57:37.059130    1308 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
	I1213 08:57:37.090124    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:37.112122    1308 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:37.112122    1308 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 08:57:37.115122    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:37.124126    1308 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 08:57:37.163123    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:37.218965    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:37.244846    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:37.292847    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:37.297857    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:37.298846    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:37.298846    1308 retry.go:31] will retry after 278.997974ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
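From here the log settles into a pattern: every kubectl apply of the addon manifests fails with "connection refused" because the apiserver on localhost:8441 has not come back up yet, and retry.go schedules another attempt after a growing, jittered delay (279ms and 213ms first, climbing to 13.3s further down). A generic sketch of that backoff-with-jitter loop, not minikube's exact retry.go:

    // backoff.go - generic exponential backoff with jitter, the pattern behind
    // the "will retry after ..." lines in this log (delays are illustrative).
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
    	delay := base
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = op(); err == nil {
    			return nil
    		}
    		jittered := delay + time.Duration(rand.Int63n(int64(delay))) // 1x..2x delay
    		fmt.Printf("attempt %d failed (%v); will retry after %v\n", i+1, err, jittered)
    		time.Sleep(jittered)
    		delay *= 2 // double the base delay each round
    	}
    	return err
    }

    func main() {
    	calls := 0
    	err := retryWithBackoff(5, 200*time.Millisecond, func() error {
    		calls++
    		if calls < 4 {
    			return errors.New("connection refused") // simulated apiserver not up yet
    		}
    		return nil
    	})
    	fmt.Println("final:", err)
    }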
	I1213 08:57:37.298846    1308 node_ready.go:35] waiting up to 6m0s for node "functional-482100" to be "Ready" ...
	I1213 08:57:37.298846    1308 type.go:168] "Request Body" body=""
	I1213 08:57:37.298846    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:37.300855    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
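In parallel, node_ready.go begins polling GET /api/v1/nodes/functional-482100 and will keep at it for up to 6 minutes until the node reports a Ready condition. The same check written directly against client-go (the kubeconfig path is an assumed placeholder) looks roughly like:

    // nodeready.go - sketch of the Ready-condition poll behind node_ready.go,
    // using client-go directly; the kubeconfig path is a placeholder.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(6 * time.Minute) // matches "waiting up to 6m0s"
    	for time.Now().Before(deadline) {
    		node, err := client.CoreV1().Nodes().Get(context.Background(), "functional-482100", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("node is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(time.Second) // poll again, as the log does every ~1s
    	}
    	fmt.Println("timed out waiting for Ready")
    }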
	I1213 08:57:37.389624    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:37.394960    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:37.394960    1308 retry.go:31] will retry after 212.815514ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:37.583432    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:37.612508    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:37.662694    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:37.668089    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:37.668089    1308 retry.go:31] will retry after 421.785382ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:37.691227    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:37.696684    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:37.696684    1308 retry.go:31] will retry after 387.963958ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.090409    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:38.094708    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:38.167644    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:38.172931    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.172931    1308 retry.go:31] will retry after 654.783355ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.174195    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:38.178117    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.178179    1308 retry.go:31] will retry after 288.314182ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.301152    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:38.301683    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:38.304388    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
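Each poll above is answered with a Retry-After hint, so with_retry.go sleeps 1s and re-issues the request, up to ten attempts before node_ready.go surfaces the EOF and starts a fresh cycle. A stripped-down http.RoundTripper applying the same Retry-After handling (simplified to integer-second values and a fixed attempt cap):

    // retryafter.go - sketch of an http.RoundTripper that retries when the
    // server sends a Retry-After header, as with_retry.go does above.
    package main

    import (
    	"fmt"
    	"net/http"
    	"strconv"
    	"time"
    )

    type retryAfterTransport struct {
    	next     http.RoundTripper
    	attempts int
    }

    func (t *retryAfterTransport) RoundTrip(req *http.Request) (*http.Response, error) {
    	var resp *http.Response
    	var err error
    	for i := 1; i <= t.attempts; i++ {
    		resp, err = t.next.RoundTrip(req)
    		if err != nil || resp.Header.Get("Retry-After") == "" {
    			return resp, err // no retry hint: hand the result back
    		}
    		secs, perr := strconv.Atoi(resp.Header.Get("Retry-After"))
    		if perr != nil {
    			return resp, err // unparsable hint: give up retrying
    		}
    		resp.Body.Close()
    		fmt.Printf("got Retry-After response, delay=%ds, attempt=%d\n", secs, i)
    		time.Sleep(time.Duration(secs) * time.Second)
    	}
    	return resp, err // attempt cap reached; caller sees the last response
    }

    func main() {
    	client := &http.Client{Transport: &retryAfterTransport{next: http.DefaultTransport, attempts: 10}}
    	resp, err := client.Get("https://127.0.0.1:63845/api/v1/nodes/functional-482100") // endpoint from the log
    	fmt.Println(resp, err)
    }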
	I1213 08:57:38.472962    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:38.544996    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:38.548547    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.548623    1308 retry.go:31] will retry after 1.098701937s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.833272    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:38.912142    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:38.912142    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.912142    1308 retry.go:31] will retry after 808.399476ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:39.305249    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:39.305249    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:39.308473    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:39.652260    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:39.721531    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:39.726229    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 08:57:39.726899    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:39.726899    1308 retry.go:31] will retry after 1.580407023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:39.799856    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:39.802238    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:39.802238    1308 retry.go:31] will retry after 1.163449845s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:40.308791    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:40.308791    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:40.310792    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:40.971107    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:41.051235    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:41.056481    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:41.056595    1308 retry.go:31] will retry after 2.292483012s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:41.312219    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:41.312219    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:41.313763    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:41.315446    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:41.385280    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:41.389328    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:41.389328    1308 retry.go:31] will retry after 2.10655749s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:42.316064    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:42.316469    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:42.319430    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:43.319659    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:43.319659    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:43.322154    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:43.354119    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:43.424936    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:43.428566    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:43.428566    1308 retry.go:31] will retry after 2.451441131s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:43.500768    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:43.577861    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:43.581800    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:43.581870    1308 retry.go:31] will retry after 1.842575818s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:44.322393    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:44.322393    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:44.326064    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:45.326352    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:45.326352    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:45.329823    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:45.430441    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:45.504084    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:45.509721    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:45.509813    1308 retry.go:31] will retry after 3.320490506s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:45.885819    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:45.962560    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:45.966882    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:45.966882    1308 retry.go:31] will retry after 5.131341184s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:46.330362    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:46.330362    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:46.333170    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:47.333778    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:47.333778    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:47.337260    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1213 08:57:47.337260    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 08:57:47.337260    1308 type.go:168] "Request Body" body=""
	I1213 08:57:47.337260    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:47.340404    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:48.340937    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:48.340937    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:48.344443    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:48.835623    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:48.914169    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:48.918486    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:48.918486    1308 retry.go:31] will retry after 6.605490232s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:49.345162    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:49.345162    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:49.347526    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:50.348478    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:50.348478    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:50.351813    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:51.103982    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:51.174396    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:51.177073    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:51.177136    1308 retry.go:31] will retry after 4.217545245s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:51.352019    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:51.352363    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:51.354826    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:52.355908    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:52.355908    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:52.358993    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:53.359347    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:53.359730    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:53.362425    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:54.363245    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:54.363536    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:54.366267    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:55.367715    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:55.367715    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:55.371143    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:55.400351    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:55.476385    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:55.480063    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:55.480122    1308 retry.go:31] will retry after 11.422205159s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:55.528824    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:55.599872    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:55.604580    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:55.604626    1308 retry.go:31] will retry after 13.338795854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:56.371517    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:56.371517    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:56.375228    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:57.375899    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:57.375899    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:57.378899    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1213 08:57:57.379427    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
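
The cycle that just ended is client-go's with_retry behavior: each GET to /api/v1/nodes/functional-482100 comes back with a Retry-After header while the apiserver is unavailable, the client re-issues the request after the advertised 1s delay, and after the attempt budget (10 here) the node-ready waiter logs the EOF and starts a fresh cycle. A plain net/http simplification of that loop, not client-go's actual code:

    package main

    import (
    	"context"
    	"fmt"
    	"net/http"
    	"strconv"
    	"time"
    )

    func getWithRetryAfter(ctx context.Context, url string, maxAttempts int) (*http.Response, error) {
    	for attempt := 1; attempt <= maxAttempts; attempt++ {
    		req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
    		if err != nil {
    			return nil, err
    		}
    		resp, err := http.DefaultClient.Do(req)
    		if err != nil {
    			return nil, err // e.g. EOF when the apiserver drops the connection
    		}
    		ra := resp.Header.Get("Retry-After")
    		if ra == "" {
    			return resp, nil // normal response: hand it back to the caller
    		}
    		resp.Body.Close()
    		secs, perr := strconv.Atoi(ra)
    		if perr != nil {
    			secs = 1 // fall back to the 1s delay seen in this log
    		}
    		fmt.Printf("Got a Retry-After response, attempt=%d\n", attempt)
    		time.Sleep(time.Duration(secs) * time.Second)
    	}
    	return nil, fmt.Errorf("still unavailable after %d attempts", maxAttempts)
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    	defer cancel()
    	resp, err := getWithRetryAfter(ctx, "https://127.0.0.1:63845/api/v1/nodes/functional-482100", 10)
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	resp.Body.Close()
    	fmt.Println("status:", resp.Status)
    }
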
	I1213 08:57:57.379613    1308 type.go:168] "Request Body" body=""
	I1213 08:57:57.379640    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:57.381380    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1213 08:57:58.382025    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:58.382025    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:58.385451    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:59.385982    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:59.386304    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:59.388570    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:00.389156    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:00.389156    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:00.393493    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 08:58:01.394059    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:01.394059    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:01.397148    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:02.397228    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:02.397593    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:02.400363    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:03.400715    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:03.401100    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:03.403595    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:04.404146    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:04.404146    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:04.407029    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:05.407299    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:05.407299    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:05.409705    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:06.410552    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:06.410552    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:06.413575    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:06.907694    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:58:06.989453    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:58:06.993505    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:06.993505    1308 retry.go:31] will retry after 9.12046724s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:07.413861    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:07.413861    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:07.423766    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=9
	W1213 08:58:07.423766    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 08:58:07.423766    1308 type.go:168] "Request Body" body=""
	I1213 08:58:07.423766    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:07.426420    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:08.426748    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:08.426748    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:08.429523    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:08.949269    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:58:09.021443    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:58:09.021574    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:09.021574    1308 retry.go:31] will retry after 18.212645226s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
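
Both addon manifests keep failing for the same root cause: nothing is listening on the apiserver port yet. A readiness check of the kind implied by these failures is just a TCP dial; the helper below is an assumption for illustration, not part of minikube's actual flow.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func apiserverUp(addr string) bool {
    	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    	if err != nil {
    		return false // matches "connect: connection refused" in the log
    	}
    	conn.Close()
    	return true
    }

    func main() {
    	fmt.Println("apiserver reachable:", apiserverUp("localhost:8441"))
    }
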
	I1213 08:58:09.429654    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:09.429654    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:09.434475    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 08:58:10.434763    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:10.434763    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:10.438337    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:11.438992    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:11.438992    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:11.442157    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:12.442370    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:12.442370    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:12.445441    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:13.446557    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:13.446557    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:13.449579    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:14.449909    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:14.449909    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:14.453875    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:15.453999    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:15.454347    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:15.457109    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:16.119722    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:58:16.199861    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:58:16.203796    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:16.203841    1308 retry.go:31] will retry after 32.127892546s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:16.457492    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:16.457492    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:16.460671    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:17.461098    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:17.461098    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:17.464303    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1213 08:58:17.464392    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 08:58:17.464557    1308 type.go:168] "Request Body" body=""
	I1213 08:58:17.464596    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:17.466792    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:18.467178    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:18.467178    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:18.471411    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 08:58:19.472813    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:19.472813    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:19.475365    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:20.475825    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:20.475825    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:20.478756    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:21.479284    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:21.479284    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:21.482725    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:22.483047    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:22.483047    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:22.486928    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:23.487680    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:23.487680    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:23.491133    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:24.491850    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:24.492121    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:24.495131    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:25.495436    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:25.495893    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:25.498242    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:26.498882    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:26.498882    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:26.501986    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:27.239685    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:58:27.315134    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:58:27.318446    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:27.318446    1308 retry.go:31] will retry after 22.292291086s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:27.502907    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:27.502907    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:27.505700    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1213 08:58:27.505700    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
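
What node_ready.go is doing across these cycles is polling the node object until its Ready condition turns True, treating transient failures like the EOFs above as retryable. A sketch of the same loop with standard client-go APIs (assumes a reachable kubeconfig; this is not minikube's code):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
    	return wait.PollUntilContextTimeout(ctx, time.Second, 5*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
    				return false, nil // transient, keep polling
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	if err := waitNodeReady(context.Background(), cs, "functional-482100"); err != nil {
    		fmt.Println("node never became Ready:", err)
    	}
    }
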
	I1213 08:58:27.505700    1308 type.go:168] "Request Body" body=""
	I1213 08:58:27.505700    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:27.508521    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:28.509510    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:28.509510    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:28.512707    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:29.513169    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:29.513169    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:29.516081    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:30.517601    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:30.517601    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:30.520368    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:31.520700    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:31.521119    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:31.524120    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:32.524848    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:32.524848    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:32.528137    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:33.529023    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:33.529412    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:33.532996    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:34.533392    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:34.533697    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:34.536406    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:35.536910    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:35.536910    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:35.539801    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:36.540290    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:36.540290    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:36.543462    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:37.544092    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:37.544398    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:37.547080    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1213 08:58:37.547165    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
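
The Accept header repeated on every request above ("application/vnd.kubernetes.protobuf,application/json") is standard client-go output when the client is configured to prefer protobuf over JSON. The rest.Config fields below are the real ones; the wrapper function is illustrative.

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/rest"
    )

    func withProtobuf(cfg *rest.Config) *rest.Config {
    	out := rest.CopyConfig(cfg)
    	// Ask for protobuf first, fall back to JSON, as in the logged header.
    	out.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
    	out.ContentType = "application/vnd.kubernetes.protobuf"
    	return out
    }

    func main() {
    	cfg := withProtobuf(&rest.Config{Host: "https://127.0.0.1:63845"})
    	fmt.Println(cfg.AcceptContentTypes)
    }
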
	I1213 08:58:37.547240    1308 type.go:168] "Request Body" body=""
	I1213 08:58:37.547322    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:37.549686    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:38.550568    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:38.550568    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:38.554061    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:39.554545    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:39.554545    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:39.556910    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:40.557343    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:40.557343    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:40.562456    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1213 08:58:41.563271    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:41.563271    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:41.566401    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:42.566676    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:42.566676    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:42.569495    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:43.570436    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:43.570436    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:43.573856    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:44.574034    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:44.574034    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:44.576971    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:45.577736    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:45.577736    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:45.580563    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:46.580998    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:46.580998    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:46.584404    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:47.585574    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:47.585574    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:47.589116    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1213 08:58:47.589116    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 08:58:47.589285    1308 type.go:168] "Request Body" body=""
	I1213 08:58:47.589330    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:47.591421    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:48.337063    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:58:48.419155    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:58:48.419236    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:48.419312    1308 retry.go:31] will retry after 42.344315794s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
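
The error text keeps offering "--validate=false" as an escape hatch because kubectl validates manifests against the server's /openapi/v2 schema; with the apiserver down, the validation step itself is what fails. A sketch of invoking that fallback from Go (illustrative only; minikube's applier keeps retrying with validation on instead):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func applyNoValidate(manifest string) error {
    	// --validate=false skips the OpenAPI download, so the apply reaches
    	// the server directly (and still fails if the server is down).
    	cmd := exec.Command("kubectl", "apply", "--force", "--validate=false", "-f", manifest)
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	out, err := cmd.CombinedOutput()
    	fmt.Printf("%s", out)
    	return err
    }

    func main() {
    	if err := applyNoValidate("/etc/kubernetes/addons/storageclass.yaml"); err != nil {
    		fmt.Println("apply failed:", err)
    	}
    }
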
	I1213 08:58:48.592137    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:48.592503    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:48.594564    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:49.594849    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:49.594849    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:49.598177    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:49.616306    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:58:49.690748    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:58:49.696226    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:49.696226    1308 retry.go:31] will retry after 43.889805704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:50.598940    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:50.598940    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:50.602650    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:51.602781    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:51.602781    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:51.606654    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:52.607136    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:52.607136    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:52.610410    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:53.610695    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:53.611291    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:53.614086    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:54.614262    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:54.614262    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:54.617596    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:55.618389    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:55.618389    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:55.621130    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:56.621484    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:56.621936    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:56.626456    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 08:58:57.626653    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:57.626653    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:57.630131    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1213 08:58:57.630131    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 08:58:57.630323    1308 type.go:168] "Request Body" body=""
	I1213 08:58:57.630411    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:57.632861    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:58.633441    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:58.634089    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:58.637246    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:59.637793    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:59.638147    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:59.641409    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:00.641531    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:00.641871    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:00.644335    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:59:01.644762    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:01.644762    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:01.647872    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:02.648069    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:02.648069    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:02.651180    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:03.651302    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:03.651302    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:03.654332    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:04.654665    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:04.654665    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:04.657952    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:05.658178    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:05.658178    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:05.662672    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 08:59:06.663347    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:06.663347    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:06.666728    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:07.667532    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:07.667885    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:07.670688    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1213 08:59:07.670852    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 08:59:07.670996    1308 type.go:168] "Request Body" body=""
	I1213 08:59:07.671070    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:07.675143    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 08:59:08.675540    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:08.675540    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:08.679392    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:09.679704    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:09.679704    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:09.683514    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:10.683721    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:10.683721    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:10.686924    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:11.687492    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:11.687492    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:11.691432    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:12.692349    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:12.692349    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:12.695226    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:59:13.696218    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:13.696218    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:13.699830    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:14.700112    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:14.700547    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:14.704305    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:15.704907    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:15.705360    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:15.708341    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:59:16.709464    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:16.709464    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:16.712813    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:17.713633    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:17.713633    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:17.716674    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1213 08:59:17.716674    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[log condensed: 08:59:17-08:59:27, a repeat of the 08:59:07 cycle shown verbatim above: fresh GET https://127.0.0.1:63845/api/v1/nodes/functional-482100 plus with_retry attempts 1-10 at a 1 s cadence, every response empty (2-4 ms)]
	W1213 08:59:27.755077    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[log condensed: 08:59:27-08:59:30, fresh GET plus with_retry attempts 1-3, all responses empty]
	I1213 08:59:30.770278    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:59:31.058703    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:59:31.062891    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:59:31.062891    1308 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
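
As the "apply failed, will retry" line shows, addons.go treats a failed manifest apply as retryable rather than fatal, which is why the same stderr reappears in the final warning once the retries are exhausted. A sketch of that retry pattern, assuming kubectl is on PATH; the attempt count and delay are illustrative, not the values minikube uses:

```go
// Sketch of an "apply failed, will retry" loop around kubectl.
// The manifest path mirrors the log; retries and delay are illustrative.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func applyWithRetry(manifest string, attempts int) error {
	var lastErr error
	for i := 1; i <= attempts; i++ {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("attempt %d: %v\n%s", i, err, out)
		fmt.Println("apply failed, will retry:", lastErr)
		time.Sleep(2 * time.Second)
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 3); err != nil {
		fmt.Println("giving up:", err)
	}
}
```
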
	[log condensed: 08:59:31-08:59:32, with_retry attempts 4-5 of the interrupted cycle, responses empty]
	I1213 08:59:33.593527    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:59:33.670412    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:59:33.677065    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:59:33.677065    1308 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 08:59:33.680151    1308 out.go:179] * Enabled addons: 
	I1213 08:59:33.683381    1308 addons.go:530] duration metric: took 1m56.7187029s for enable addons: enabled=[]
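
The with_retry.go lines that dominate this log are client-go's transport-level retry: a response carrying a Retry-After hint is re-issued after that delay, up to ten attempts per fetch here, before the EOF is handed back to node_ready.go. A generic sketch of the same pattern over plain net/http, not client-go's actual implementation; the URL is the one from this log and would fail TLS verification against a real minikube apiserver, which simply exercises the retry path:

```go
// Generic Retry-After-aware GET capped at a fixed number of attempts,
// in the spirit of the with_retry.go lines above. Plain net/http, not client-go.
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

func getWithRetryAfter(url string, maxAttempts int) (*http.Response, error) {
	var lastErr error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		resp, err := http.Get(url)
		if err == nil && resp.StatusCode != http.StatusTooManyRequests && resp.StatusCode < 500 {
			return resp, nil // success or a non-retryable client error
		}
		delay := time.Second // matches the 1 s delay seen in the log
		if err != nil {
			lastErr = err
		} else {
			// Retry-After may be an integer number of seconds (HTTP dates are ignored here).
			if s := resp.Header.Get("Retry-After"); s != "" {
				if secs, perr := strconv.Atoi(s); perr == nil {
					delay = time.Duration(secs) * time.Second
				}
			}
			resp.Body.Close()
			lastErr = fmt.Errorf("attempt %d: status %s", attempt, resp.Status)
		}
		time.Sleep(delay)
	}
	return nil, lastErr
}

func main() {
	resp, err := getWithRetryAfter("https://127.0.0.1:63845/api/v1/nodes/functional-482100", 10)
	if err != nil {
		fmt.Println("giving up:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```
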
	[log condensed: 08:59:33-08:59:37, with_retry attempts 6-10, responses empty]
	W1213 08:59:37.792181    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[log condensed: 08:59:37-08:59:47, another repeat of the same cycle: fresh GET plus with_retry attempts 1-10, all responses empty]
	W1213 08:59:47.837252    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[log condensed: 08:59:47-08:59:57, another repeat of the same cycle: fresh GET plus with_retry attempts 1-10, all responses empty]
	W1213 08:59:57.878519    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[log condensed: 08:59:57-09:00:07, another repeat of the same cycle: fresh GET plus with_retry attempts 1-10, all responses empty]
	W1213 09:00:07.920154    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[log condensed: 09:00:07-09:00:17, another repeat of the same cycle: fresh GET plus with_retry attempts 1-10, all responses empty]
	W1213 09:00:17.960625    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[log condensed: 09:00:17-09:00:27, another repeat of the same cycle: fresh GET plus with_retry attempts 1-10, all responses empty]
	W1213 09:00:27.998665    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[log condensed: 09:00:27-09:00:38, another repeat of the same cycle: fresh GET plus with_retry attempts 1-10, all responses empty]
	W1213 09:00:38.038187    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
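
The cycle above is the pattern that dominates this part of the log: every GET to the apiserver comes back empty (status="" headers=""), the Kubernetes client's retry machinery (with_retry.go, round_trippers.go) honors the advertised 1 s Retry-After for up to ten attempts, and the readiness poller (node_ready.go) then logs the EOF warning and starts a fresh cycle. Below is a minimal, self-contained Go sketch of that retry-with-Retry-After pattern; it is not minikube's actual implementation, and the function name getWithRetryAfter, the attempt cap, and the fallback delay are illustrative assumptions.

package main

import (
	"fmt"
	"io"
	"net/http"
	"strconv"
	"time"
)

// getWithRetryAfter re-issues a GET against url until it succeeds or the
// attempt budget is spent, sleeping between attempts for the server's
// Retry-After value when one is present, else for fallback.
func getWithRetryAfter(client *http.Client, url string, maxAttempts int, fallback time.Duration) ([]byte, error) {
	var lastErr error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		delay := fallback
		resp, err := client.Get(url)
		if err != nil {
			// Transport-level failures, such as the EOF in the log above,
			// are treated as transient and retried.
			lastErr = err
		} else {
			body, readErr := io.ReadAll(resp.Body)
			resp.Body.Close()
			switch {
			case readErr != nil:
				lastErr = readErr
			case resp.StatusCode == http.StatusOK:
				return body, nil
			default:
				lastErr = fmt.Errorf("unexpected status %q", resp.Status)
			}
			// Honor an explicit Retry-After header (seconds form) if sent.
			if s := resp.Header.Get("Retry-After"); s != "" {
				if secs, err := strconv.Atoi(s); err == nil && secs > 0 {
					delay = time.Duration(secs) * time.Second
				}
			}
		}
		if attempt < maxAttempts {
			time.Sleep(delay) // the log shows this as delay="1s"
		}
	}
	return nil, fmt.Errorf("giving up after %d attempts: %w", maxAttempts, lastErr)
}

func main() {
	// Illustrative target: the forwarded apiserver endpoint from the log.
	// A real client would also need the cluster's CA-aware TLS config;
	// http.DefaultClient is used only to keep the sketch short.
	body, err := getWithRetryAfter(http.DefaultClient, "https://127.0.0.1:63845/api/v1/nodes/functional-482100", 10, time.Second)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("got %d bytes\n", len(body))
}

In this failing run the apiserver never recovers, so every ten-attempt budget ends in the same "Ready" EOF warning, as the condensed log below shows.
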
	I1213 09:00:38.038337    1308 type.go:168] "Request Body" body=""
	I1213 09:00:38.038396    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:38.040656    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:00:39.041187    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:39.041187    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:39.044199    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:00:40.044653    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:40.045022    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:40.048245    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:00:41.048510    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:41.048510    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:41.052268    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:00:42.053226    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:42.053226    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:42.056222    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:00:43.056546    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:43.056546    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:43.059398    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:00:44.059625    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:44.059625    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:44.062923    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:00:45.063384    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:45.063384    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:45.066631    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:00:46.067306    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:46.067306    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:46.070443    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:00:47.070777    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:47.070777    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:47.073795    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:00:48.074558    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:48.074558    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:48.077853    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1213 09:00:48.077917    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 09:00:48.078016    1308 type.go:168] "Request Body" body=""
	I1213 09:00:48.078098    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:48.080934    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:00:49.082070    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:49.082070    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:49.084982    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:00:50.085640    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:50.085640    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:50.088925    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:00:51.089700    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:51.089700    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:51.092744    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:00:52.093791    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:52.093791    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:52.096573    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:00:53.097781    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:53.097781    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:53.100957    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:00:54.101759    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:54.101759    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:54.104615    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:00:55.105494    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:55.105919    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:55.109444    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:00:56.110146    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:56.110146    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:56.114930    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 09:00:57.115147    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:57.115467    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:57.118438    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:00:58.119483    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:58.119483    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:58.122648    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1213 09:00:58.122648    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 09:00:58.122648    1308 type.go:168] "Request Body" body=""
	I1213 09:00:58.123185    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:58.125195    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:00:59.125875    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:59.125875    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:59.129393    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:00.129668    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:00.129668    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:00.132627    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:01.133033    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:01.133525    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:01.136658    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:02.137163    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:02.137163    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:02.140403    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:03.140588    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:03.140588    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:03.143578    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:04.144312    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:04.144312    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:04.147391    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:05.148065    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:05.148453    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:05.152235    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:06.152555    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:06.152555    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:06.155862    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:07.156337    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:07.156337    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:07.159561    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:08.160007    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:08.160007    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:08.163399    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1213 09:01:08.163399    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 09:01:08.163399    1308 type.go:168] "Request Body" body=""
	I1213 09:01:08.163399    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:08.165301    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1213 09:01:09.166036    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:09.166036    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:09.169312    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:10.170153    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:10.170153    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:10.173337    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:11.173766    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:11.173766    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:11.176583    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:12.177289    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:12.177289    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:12.180992    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:13.181441    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:13.181441    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:13.183966    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:14.185028    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:14.185028    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:14.189060    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 09:01:15.189819    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:15.190274    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:15.193013    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:16.193531    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:16.193531    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:16.196639    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:17.197877    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:17.197877    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:17.201511    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:18.201776    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:18.201776    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:18.204748    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1213 09:01:18.204825    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 09:01:18.204913    1308 type.go:168] "Request Body" body=""
	I1213 09:01:18.204983    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:18.206713    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1213 09:01:19.207179    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:19.207179    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:19.210389    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:20.210678    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:20.210678    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:20.213343    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:21.213955    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:21.214383    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:21.217244    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:22.217764    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:22.217764    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:22.221016    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:23.221538    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:23.222082    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:23.225141    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:24.225563    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:24.225563    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:24.228842    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:25.229501    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:25.229896    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:25.232481    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:26.232855    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:26.232855    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:26.235225    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:27.235999    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:27.235999    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:27.239007    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:28.239290    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:28.239796    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:28.242163    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1213 09:01:28.242163    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 09:01:28.242754    1308 type.go:168] "Request Body" body=""
	I1213 09:01:28.242754    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:28.245406    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:29.246227    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:29.246227    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:29.249049    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:30.249528    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:30.249528    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:30.252945    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:31.253720    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:31.253720    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:31.257007    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:32.257727    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:32.257727    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:32.260807    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:33.261355    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:33.261355    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:33.264412    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:34.265479    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:34.265479    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:34.268382    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:35.269039    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:35.269258    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:35.271838    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:36.272075    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:36.272075    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:36.275197    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:37.275934    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:37.275934    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:37.280528    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 09:01:38.281387    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:38.281707    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:38.284450    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1213 09:01:38.284566    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 09:01:38.284566    1308 type.go:168] "Request Body" body=""
	I1213 09:01:38.284566    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:38.287277    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:39.287457    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:39.287457    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:39.290889    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:40.291630    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:40.291630    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:40.295337    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:41.295926    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:41.296353    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:41.299053    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:42.300178    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:42.300178    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:42.303160    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:43.304403    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:43.305041    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:43.309194    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 09:01:44.310087    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:44.310087    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:44.312799    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:45.313738    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:45.313738    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:45.317911    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 09:01:46.319411    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:46.319411    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:46.323036    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:47.323495    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:47.323495    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:47.326782    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:48.327222    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:48.327222    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:48.331951    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1213 09:01:48.331951    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 09:01:48.331951    1308 type.go:168] "Request Body" body=""
	I1213 09:01:48.331951    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:48.336553    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 09:01:49.337686    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:49.337686    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:49.340983    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:50.342115    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:50.342115    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:50.344717    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:51.345242    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:51.345242    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:51.347895    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:52.348829    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:52.348829    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:52.353265    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 09:01:53.353621    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:53.353621    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:53.356851    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:54.357643    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:54.357643    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:54.360716    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:55.361583    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:55.361583    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:55.364202    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:56.364951    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:56.364951    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:56.368507    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:57.368791    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:57.368791    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:57.373234    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 09:01:58.373801    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:58.373801    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:58.376426    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1213 09:01:58.376426    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 09:01:58.376426    1308 type.go:168] "Request Body" body=""
	I1213 09:01:58.377111    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:58.379740    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:59.379930    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:59.380415    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:59.383047    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:02:00.384221    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:00.384221    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:00.387516    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:02:01.388029    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:01.388029    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:01.392383    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 09:02:02.392602    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:02.392956    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:02.396482    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:02:03.397017    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:03.397017    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:03.400427    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:02:04.400756    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:04.400756    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:04.404303    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:02:05.404720    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:05.404720    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:05.408936    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 09:02:06.409154    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:06.409154    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:06.412227    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:02:07.412599    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:07.412599    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:07.415247    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:02:08.415920    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:08.415920    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:08.419260    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1213 09:02:08.419342    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[... the request/response cycle above repeats once per second: each block of ten 1s retries against https://127.0.0.1:63845/api/v1/nodes/functional-482100 ends with W node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF, logged at 09:02:18, 09:02:28, 09:02:38, 09:02:48, 09:02:58, 09:03:08, 09:03:18, and 09:03:28; the final block is cut off after attempt 8 at 09:03:36 by the 6m context deadline ...]
	W1213 09:03:37.302575    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1213 09:03:37.302575    1308 node_ready.go:38] duration metric: took 6m0.0011646s for node "functional-482100" to be "Ready" ...
	I1213 09:03:37.305847    1308 out.go:203] 
	W1213 09:03:37.307851    1308 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 09:03:37.307851    1308 out.go:285] * 
	W1213 09:03:37.311623    1308 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 09:03:37.314310    1308 out.go:203] 
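	The six minutes of log above are a single readiness wait: minikube polls GET /api/v1/nodes/functional-482100 once per second, every attempt fails with EOF before the apiserver responds, and client-go's rate limiter finally surfaces the 6m context deadline. The endpoint can be probed by hand; treat the following as a sketch, since port 63845 is the Docker-published apiserver port for this particular run and changes on every start:

	  # Hypothetical manual probe of the endpoint from the log above; an
	  # immediate EOF here reproduces the retry loop's failure mode.
	  curl -k https://127.0.0.1:63845/api/v1/nodes/functional-482100
	  # Query the Ready condition via the profile's kubeconfig context
	  # (minikube names the context after the profile).
	  kubectl --context functional-482100 get node functional-482100 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'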
	
	
	==> Docker <==
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.525747623Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.525754023Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.525775925Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.525849730Z" level=info msg="Initializing buildkit"
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.646190196Z" level=info msg="Completed buildkit initialization"
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.655073529Z" level=info msg="Daemon has completed initialization"
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.655186237Z" level=info msg="API listen on /run/docker.sock"
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.655229540Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.655448956Z" level=info msg="API listen on [::]:2376"
	Dec 13 08:57:33 functional-482100 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 13 08:57:33 functional-482100 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 08:57:33 functional-482100 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 13 08:57:33 functional-482100 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 13 08:57:34 functional-482100 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Start docker client with request timeout 0s"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Loaded network plugin cni"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 13 08:57:34 functional-482100 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:04:33.690371   18364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:04:33.691398   18364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:04:33.692504   18364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:04:33.693536   18364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:04:33.694423   18364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000739] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000891] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001020] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001158] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001174] FS:  0000000000000000 GS:  0000000000000000
	[Dec13 08:57] CPU: 3 PID: 54870 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000964] RIP: 0033:0x7f5dc4ba4b20
	[  +0.000410] Code: Unable to access opcode bytes at RIP 0x7f5dc4ba4af6.
	[  +0.000689] RSP: 002b:00007ffdbe9599f0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000820] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000875] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001112] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001539] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001199] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001222] FS:  0000000000000000 GS:  0000000000000000
	[  +0.961990] CPU: 3 PID: 54996 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000796] RIP: 0033:0x7f46e6061b20
	[  +0.000388] Code: Unable to access opcode bytes at RIP 0x7f46e6061af6.
	[  +0.000654] RSP: 002b:00007ffd6f1408e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000776] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000787] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001010] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001229] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001341] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001210] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 09:04:33 up 40 min,  0 user,  load average: 0.44, 0.39, 0.59
	Linux functional-482100 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 09:04:30 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:04:30 functional-482100 kubelet[18206]: E1213 09:04:30.793005   18206 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:04:30 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:04:30 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:04:31 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 890.
	Dec 13 09:04:31 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:04:31 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:04:31 functional-482100 kubelet[18218]: E1213 09:04:31.535594   18218 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:04:31 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:04:31 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:04:32 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 891.
	Dec 13 09:04:32 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:04:32 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:04:32 functional-482100 kubelet[18230]: E1213 09:04:32.301176   18230 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:04:32 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:04:32 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:04:32 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 892.
	Dec 13 09:04:32 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:04:32 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:04:33 functional-482100 kubelet[18258]: E1213 09:04:33.048844   18258 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:04:33 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:04:33 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:04:33 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 893.
	Dec 13 09:04:33 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:04:33 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-482100 -n functional-482100
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-482100 -n functional-482100: exit status 2 (569.8278ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-482100" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (53.59s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (54.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 kubectl -- --context functional-482100 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-482100 kubectl -- --context functional-482100 get pods: exit status 1 (50.5947622s)

                                                
                                                
** stderr ** 
	E1213 09:05:05.031809    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:05:15.119390    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:05:25.162306    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:05:35.204040    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:05:45.243892    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-windows-amd64.exe -p functional-482100 kubectl -- --context functional-482100 get pods": exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-482100
helpers_test.go:244: (dbg) docker inspect functional-482100:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa",
	        "Created": "2025-12-13T08:49:07.27080474Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43282,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T08:49:07.556748749Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/hostname",
	        "HostsPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/hosts",
	        "LogPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa-json.log",
	        "Name": "/functional-482100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-482100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-482100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91-init/diff:/var/lib/docker/overlay2/429aa299c6fcdb1695d08ec7c893c57c033afffcd3ec41fc904bf3236db5abde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-482100",
	                "Source": "/var/lib/docker/volumes/functional-482100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-482100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-482100",
	                "name.minikube.sigs.k8s.io": "functional-482100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0846ee7b9ca8cb54809a7d685cd1bf9a4ebcad80c4fa7d3ad64c01e27d0c8bc4",
	            "SandboxKey": "/var/run/docker/netns/0846ee7b9ca8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63841"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63842"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63844"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63845"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-482100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "88ce21d6cbdebdf878313475255fe0fbc85957ab9cf1fa33630b61bbbfd2061c",
	                    "EndpointID": "88d9584a7fae8c35f7938fb422a7bed2f8ec5a3db15bd02c0d2459ed9f8f0e4d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-482100",
	                        "688ac19b4403"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-482100 -n functional-482100
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-482100 -n functional-482100: exit status 2 (660.1644ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-482100 logs -n 25: (1.6159682s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                          ARGS                                                           │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-213400 image ls --format short --alsologtostderr                                                             │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ image   │ functional-213400 image ls --format yaml --alsologtostderr                                                              │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ ssh     │ functional-213400 ssh pgrep buildkitd                                                                                   │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │                     │
	│ image   │ functional-213400 image ls --format json --alsologtostderr                                                              │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ image   │ functional-213400 image build -t localhost/my-image:functional-213400 testdata\build --alsologtostderr                  │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ image   │ functional-213400 image ls --format table --alsologtostderr                                                             │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ image   │ functional-213400 image ls                                                                                              │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ delete  │ -p functional-213400                                                                                                    │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:48 UTC │ 13 Dec 25 08:48 UTC │
	│ start   │ -p functional-482100 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:48 UTC │                     │
	│ start   │ -p functional-482100 --alsologtostderr -v=8                                                                             │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:57 UTC │                     │
	│ cache   │ functional-482100 cache add registry.k8s.io/pause:3.1                                                                   │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ functional-482100 cache add registry.k8s.io/pause:3.3                                                                   │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ functional-482100 cache add registry.k8s.io/pause:latest                                                                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ functional-482100 cache add minikube-local-cache-test:functional-482100                                                 │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ functional-482100 cache delete minikube-local-cache-test:functional-482100                                              │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ list                                                                                                                    │ minikube          │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ ssh     │ functional-482100 ssh sudo crictl images                                                                                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ ssh     │ functional-482100 ssh sudo docker rmi registry.k8s.io/pause:latest                                                      │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ ssh     │ functional-482100 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │                     │
	│ cache   │ functional-482100 cache reload                                                                                          │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ ssh     │ functional-482100 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                     │ minikube          │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ kubectl │ functional-482100 kubectl -- --context functional-482100 get pods                                                       │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:57:27
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 08:57:27.379293    1308 out.go:360] Setting OutFile to fd 1960 ...
	I1213 08:57:27.421775    1308 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:57:27.421775    1308 out.go:374] Setting ErrFile to fd 2020...
	I1213 08:57:27.421858    1308 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:57:27.434678    1308 out.go:368] Setting JSON to false
	I1213 08:57:27.436793    1308 start.go:133] hostinfo: {"hostname":"minikube4","uptime":2054,"bootTime":1765614192,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 08:57:27.436793    1308 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 08:57:27.440227    1308 out.go:179] * [functional-482100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 08:57:27.444177    1308 notify.go:221] Checking for updates...
	I1213 08:57:27.444177    1308 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:57:27.446958    1308 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:57:27.448893    1308 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 08:57:27.451179    1308 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:57:27.453000    1308 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:57:27.455340    1308 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 08:57:27.456010    1308 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:57:27.677552    1308 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 08:57:27.681550    1308 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:57:27.918123    1308 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-13 08:57:27.897746454 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 08:57:27.922386    1308 out.go:179] * Using the docker driver based on existing profile
	I1213 08:57:27.925483    1308 start.go:309] selected driver: docker
	I1213 08:57:27.925483    1308 start.go:927] validating driver "docker" against &{Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:57:27.925483    1308 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:57:27.931484    1308 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:57:28.158174    1308 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-13 08:57:28.141185883 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 08:57:28.238865    1308 cni.go:84] Creating CNI manager for ""
	I1213 08:57:28.238865    1308 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 08:57:28.239498    1308 start.go:353] cluster config:
	{Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:57:28.243527    1308 out.go:179] * Starting "functional-482100" primary control-plane node in "functional-482100" cluster
	I1213 08:57:28.245818    1308 cache.go:134] Beginning downloading kic base image for docker with docker
	I1213 08:57:28.247303    1308 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 08:57:28.251374    1308 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 08:57:28.251465    1308 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 08:57:28.251634    1308 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1213 08:57:28.251673    1308 cache.go:65] Caching tarball of preloaded images
	I1213 08:57:28.251673    1308 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 08:57:28.251673    1308 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1213 08:57:28.251673    1308 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\config.json ...
	I1213 08:57:28.331506    1308 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 08:57:28.331506    1308 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 08:57:28.331506    1308 cache.go:243] Successfully downloaded all kic artifacts
	I1213 08:57:28.331506    1308 start.go:360] acquireMachinesLock for functional-482100: {Name:mkdbad0c5d0c221588a4a9490c5c0730668b0a50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 08:57:28.331506    1308 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-482100"
	I1213 08:57:28.331506    1308 start.go:96] Skipping create...Using existing machine configuration
	I1213 08:57:28.331506    1308 fix.go:54] fixHost starting: 
	I1213 08:57:28.338850    1308 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
	I1213 08:57:28.394405    1308 fix.go:112] recreateIfNeeded on functional-482100: state=Running err=<nil>
	W1213 08:57:28.394453    1308 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 08:57:28.397828    1308 out.go:252] * Updating the running docker "functional-482100" container ...
	I1213 08:57:28.397828    1308 machine.go:94] provisionDockerMachine start ...
	I1213 08:57:28.401414    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:28.456355    1308 main.go:143] libmachine: Using SSH client type: native
	I1213 08:57:28.457085    1308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:57:28.457134    1308 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 08:57:28.656820    1308 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-482100
	
	I1213 08:57:28.656820    1308 ubuntu.go:182] provisioning hostname "functional-482100"
	I1213 08:57:28.660505    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:28.713653    1308 main.go:143] libmachine: Using SSH client type: native
	I1213 08:57:28.714127    1308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:57:28.714127    1308 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-482100 && echo "functional-482100" | sudo tee /etc/hostname
	I1213 08:57:28.912851    1308 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-482100
	
	I1213 08:57:28.916558    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:28.972916    1308 main.go:143] libmachine: Using SSH client type: native
	I1213 08:57:28.973035    1308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:57:28.973035    1308 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-482100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-482100/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-482100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 08:57:29.158720    1308 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 08:57:29.158720    1308 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1213 08:57:29.158720    1308 ubuntu.go:190] setting up certificates
	I1213 08:57:29.158720    1308 provision.go:84] configureAuth start
	I1213 08:57:29.162705    1308 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-482100
	I1213 08:57:29.217525    1308 provision.go:143] copyHostCerts
	I1213 08:57:29.217525    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1213 08:57:29.217525    1308 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1213 08:57:29.217525    1308 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1213 08:57:29.218193    1308 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1213 08:57:29.218931    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1213 08:57:29.219078    1308 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1213 08:57:29.219114    1308 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1213 08:57:29.219299    1308 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1213 08:57:29.220064    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1213 08:57:29.220064    1308 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1213 08:57:29.220064    1308 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1213 08:57:29.220064    1308 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1213 08:57:29.220972    1308 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-482100 san=[127.0.0.1 192.168.49.2 functional-482100 localhost minikube]
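provision.go signs the Docker TLS server certificate in-process from the CA pair; note the SAN list, which must cover every name a client may dial: 127.0.0.1 (the forwarded port on the host), 192.168.49.2 (the container's address on the Docker network), and the hostnames. A rough openssl equivalent, purely for illustration; minikube does not shell out to openssl for this:

    # Illustrative only: generate and sign a server cert with the same SANs
    openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.functional-482100" \
      -keyout server-key.pem -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:functional-482100,DNS:localhost,DNS:minikube')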
	I1213 08:57:29.312824    1308 provision.go:177] copyRemoteCerts
	I1213 08:57:29.317163    1308 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 08:57:29.320164    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:29.370164    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:29.504512    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1213 08:57:29.504655    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 08:57:29.542721    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1213 08:57:29.542721    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 08:57:29.574672    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1213 08:57:29.574672    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 08:57:29.604045    1308 provision.go:87] duration metric: took 445.3221ms to configureAuth
	I1213 08:57:29.604045    1308 ubuntu.go:206] setting minikube options for container-runtime
	I1213 08:57:29.605053    1308 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 08:57:29.610417    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:29.666069    1308 main.go:143] libmachine: Using SSH client type: native
	I1213 08:57:29.666532    1308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:57:29.666532    1308 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 08:57:29.836610    1308 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1213 08:57:29.836610    1308 ubuntu.go:71] root file system type: overlay
	I1213 08:57:29.836610    1308 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 08:57:29.840760    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:29.894590    1308 main.go:143] libmachine: Using SSH client type: native
	I1213 08:57:29.895592    1308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:57:29.895592    1308 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 08:57:30.101134    1308 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 08:57:30.105760    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:30.161736    1308 main.go:143] libmachine: Using SSH client type: native
	I1213 08:57:30.162318    1308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:57:30.162318    1308 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 08:57:30.345094    1308 main.go:143] libmachine: SSH cmd err, output: <nil>: 
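The command above is minikube's update-if-changed idiom: `diff -u` exits non-zero only when the freshly rendered docker.service.new differs from the installed unit, so the mv/daemon-reload/enable/restart branch fires only on a real change. The empty output here indicates the unit already matched, so Docker was not restarted at this step. Generalized:

    # Sketch of the diff-or-replace idiom; render_unit stands in for the printf above
    render_unit > /tmp/docker.service.new
    sudo diff -u /lib/systemd/system/docker.service /tmp/docker.service.new || {
      sudo mv /tmp/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload
      sudo systemctl -f enable docker
      sudo systemctl -f restart docker
    }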
	I1213 08:57:30.345094    1308 machine.go:97] duration metric: took 1.947253s to provisionDockerMachine
	I1213 08:57:30.345094    1308 start.go:293] postStartSetup for "functional-482100" (driver="docker")
	I1213 08:57:30.345094    1308 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 08:57:30.349348    1308 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 08:57:30.352292    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:30.407399    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:30.537367    1308 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 08:57:30.545885    1308 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 08:57:30.545957    1308 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 08:57:30.545957    1308 command_runner.go:130] > VERSION_ID="12"
	I1213 08:57:30.545957    1308 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 08:57:30.545957    1308 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 08:57:30.545957    1308 command_runner.go:130] > ID=debian
	I1213 08:57:30.545957    1308 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 08:57:30.545957    1308 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 08:57:30.545957    1308 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 08:57:30.546095    1308 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 08:57:30.546117    1308 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 08:57:30.546141    1308 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1213 08:57:30.546161    1308 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1213 08:57:30.546880    1308 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> 29682.pem in /etc/ssl/certs
	I1213 08:57:30.546880    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> /etc/ssl/certs/29682.pem
	I1213 08:57:30.547539    1308 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\2968\hosts -> hosts in /etc/test/nested/copy/2968
	I1213 08:57:30.547539    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\2968\hosts -> /etc/test/nested/copy/2968/hosts
	I1213 08:57:30.551732    1308 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2968
	I1213 08:57:30.565806    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /etc/ssl/certs/29682.pem (1708 bytes)
	I1213 08:57:30.596092    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\2968\hosts --> /etc/test/nested/copy/2968/hosts (40 bytes)
	I1213 08:57:30.624821    1308 start.go:296] duration metric: took 279.7253ms for postStartSetup
	I1213 08:57:30.629883    1308 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 08:57:30.633087    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:30.686590    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:30.807695    1308 command_runner.go:130] > 1%
	I1213 08:57:30.812335    1308 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 08:57:30.820851    1308 command_runner.go:130] > 950G
	I1213 08:57:30.820851    1308 fix.go:56] duration metric: took 2.4893282s for fixHost
	I1213 08:57:30.820851    1308 start.go:83] releasing machines lock for "functional-482100", held for 2.4893282s
	I1213 08:57:30.824237    1308 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-482100
	I1213 08:57:30.876765    1308 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1213 08:57:30.881324    1308 ssh_runner.go:195] Run: cat /version.json
	I1213 08:57:30.881371    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:30.884518    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:30.935914    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:30.935914    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:31.066730    1308 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1213 08:57:31.066730    1308 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1213 08:57:31.066730    1308 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 08:57:31.071708    1308 ssh_runner.go:195] Run: systemctl --version
	I1213 08:57:31.084553    1308 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 08:57:31.084640    1308 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 08:57:31.090087    1308 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 08:57:31.099561    1308 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 08:57:31.100565    1308 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 08:57:31.105214    1308 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 08:57:31.124077    1308 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 08:57:31.124077    1308 start.go:496] detecting cgroup driver to use...
	I1213 08:57:31.124077    1308 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 08:57:31.124648    1308 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 08:57:31.147852    1308 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
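At this point /etc/crictl.yaml points crictl at containerd's socket; once the docker runtime is selected (08:57:32 below) it is rewritten to unix:///var/run/cri-dockerd.sock. The same endpoint can also be supplied per invocation, bypassing the config file:

    # One-off query against the CRI endpoint used later in this run
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version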
	I1213 08:57:31.152021    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 08:57:31.174172    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1213 08:57:31.176576    1308 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1213 08:57:31.176576    1308 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1213 08:57:31.189695    1308 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 08:57:31.194128    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 08:57:31.213650    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:57:31.232544    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 08:57:31.252203    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:57:31.274175    1308 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 08:57:31.296706    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 08:57:31.315777    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 08:57:31.334664    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 08:57:31.355488    1308 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 08:57:31.369376    1308 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 08:57:31.373398    1308 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
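These two kernel settings are standard Kubernetes networking prerequisites: bridge-nf-call-iptables=1 makes bridged pod traffic traverse iptables (where kube-proxy rules live), and ip_forward=1 lets the node route packets between pods and the outside. To check both on a node:

    # Both should report 1 on a functioning Kubernetes node
    sudo sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward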
	I1213 08:57:31.391830    1308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:57:31.608372    1308 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 08:57:31.906123    1308 start.go:496] detecting cgroup driver to use...
	I1213 08:57:31.906123    1308 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 08:57:31.911089    1308 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 08:57:31.932611    1308 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1213 08:57:31.933145    1308 command_runner.go:130] > [Unit]
	I1213 08:57:31.933145    1308 command_runner.go:130] > Description=Docker Application Container Engine
	I1213 08:57:31.933145    1308 command_runner.go:130] > Documentation=https://docs.docker.com
	I1213 08:57:31.933145    1308 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1213 08:57:31.933145    1308 command_runner.go:130] > Wants=network-online.target containerd.service
	I1213 08:57:31.933145    1308 command_runner.go:130] > Requires=docker.socket
	I1213 08:57:31.933145    1308 command_runner.go:130] > StartLimitBurst=3
	I1213 08:57:31.933239    1308 command_runner.go:130] > StartLimitIntervalSec=60
	I1213 08:57:31.933239    1308 command_runner.go:130] > [Service]
	I1213 08:57:31.933239    1308 command_runner.go:130] > Type=notify
	I1213 08:57:31.933239    1308 command_runner.go:130] > Restart=always
	I1213 08:57:31.933239    1308 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1213 08:57:31.933239    1308 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1213 08:57:31.933303    1308 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1213 08:57:31.933336    1308 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1213 08:57:31.933336    1308 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1213 08:57:31.933336    1308 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1213 08:57:31.933336    1308 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1213 08:57:31.933336    1308 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1213 08:57:31.933336    1308 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1213 08:57:31.933415    1308 command_runner.go:130] > ExecStart=
	I1213 08:57:31.933415    1308 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1213 08:57:31.933415    1308 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1213 08:57:31.933415    1308 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1213 08:57:31.933498    1308 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1213 08:57:31.933498    1308 command_runner.go:130] > LimitNOFILE=infinity
	I1213 08:57:31.933498    1308 command_runner.go:130] > LimitNPROC=infinity
	I1213 08:57:31.933498    1308 command_runner.go:130] > LimitCORE=infinity
	I1213 08:57:31.933498    1308 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1213 08:57:31.933498    1308 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1213 08:57:31.933498    1308 command_runner.go:130] > TasksMax=infinity
	I1213 08:57:31.933498    1308 command_runner.go:130] > TimeoutStartSec=0
	I1213 08:57:31.933572    1308 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1213 08:57:31.933591    1308 command_runner.go:130] > Delegate=yes
	I1213 08:57:31.933591    1308 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1213 08:57:31.933591    1308 command_runner.go:130] > KillMode=process
	I1213 08:57:31.933591    1308 command_runner.go:130] > OOMScoreAdjust=-500
	I1213 08:57:31.933591    1308 command_runner.go:130] > [Install]
	I1213 08:57:31.933591    1308 command_runner.go:130] > WantedBy=multi-user.target
	I1213 08:57:31.938295    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 08:57:31.960377    1308 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 08:57:32.049121    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 08:57:32.071680    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 08:57:32.093496    1308 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 08:57:32.115103    1308 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1213 08:57:32.119951    1308 ssh_runner.go:195] Run: which cri-dockerd
	I1213 08:57:32.126371    1308 command_runner.go:130] > /usr/bin/cri-dockerd
	I1213 08:57:32.130902    1308 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 08:57:32.144169    1308 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1213 08:57:32.170348    1308 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 08:57:32.320163    1308 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 08:57:32.454851    1308 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 08:57:32.454851    1308 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 08:57:32.483674    1308 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1213 08:57:32.505831    1308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:57:32.661991    1308 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 08:57:33.665330    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 08:57:33.689450    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 08:57:33.711087    1308 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1213 08:57:33.739462    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 08:57:33.760714    1308 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 08:57:33.900242    1308 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 08:57:34.052335    1308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:57:34.188283    1308 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 08:57:34.213402    1308 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1213 08:57:34.237672    1308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:57:34.381154    1308 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 08:57:34.499581    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 08:57:34.518141    1308 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 08:57:34.522686    1308 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 08:57:34.529494    1308 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1213 08:57:34.529494    1308 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 08:57:34.529494    1308 command_runner.go:130] > Device: 0,112	Inode: 1755        Links: 1
	I1213 08:57:34.529494    1308 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1213 08:57:34.529494    1308 command_runner.go:130] > Access: 2025-12-13 08:57:34.386291479 +0000
	I1213 08:57:34.529494    1308 command_runner.go:130] > Modify: 2025-12-13 08:57:34.386291479 +0000
	I1213 08:57:34.529494    1308 command_runner.go:130] > Change: 2025-12-13 08:57:34.386291479 +0000
	I1213 08:57:34.529494    1308 command_runner.go:130] >  Birth: -
	I1213 08:57:34.529494    1308 start.go:564] Will wait 60s for crictl version
	I1213 08:57:34.534224    1308 ssh_runner.go:195] Run: which crictl
	I1213 08:57:34.541202    1308 command_runner.go:130] > /usr/local/bin/crictl
	I1213 08:57:34.545269    1308 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 08:57:34.587655    1308 command_runner.go:130] > Version:  0.1.0
	I1213 08:57:34.587655    1308 command_runner.go:130] > RuntimeName:  docker
	I1213 08:57:34.587655    1308 command_runner.go:130] > RuntimeVersion:  29.1.2
	I1213 08:57:34.587655    1308 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 08:57:34.587655    1308 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1213 08:57:34.590292    1308 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 08:57:34.627699    1308 command_runner.go:130] > 29.1.2
	I1213 08:57:34.631112    1308 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 08:57:34.669555    1308 command_runner.go:130] > 29.1.2
	I1213 08:57:34.677969    1308 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1213 08:57:34.681392    1308 cli_runner.go:164] Run: docker exec -t functional-482100 dig +short host.docker.internal
	I1213 08:57:34.898094    1308 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1213 08:57:34.902419    1308 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1213 08:57:34.910595    1308 command_runner.go:130] > 192.168.65.254	host.minikube.internal
	I1213 08:57:34.914565    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:34.972832    1308 kubeadm.go:884] updating cluster {Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 08:57:34.972832    1308 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 08:57:34.977045    1308 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1213 08:57:35.008739    1308 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 08:57:35.008739    1308 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 08:57:35.010249    1308 docker.go:621] Images already preloaded, skipping extraction
	I1213 08:57:35.013678    1308 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 08:57:35.043903    1308 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 08:57:35.043957    1308 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 08:57:35.043957    1308 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 08:57:35.043957    1308 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 08:57:35.043957    1308 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1213 08:57:35.043957    1308 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1213 08:57:35.043957    1308 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1213 08:57:35.044022    1308 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 08:57:35.044104    1308 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 08:57:35.044104    1308 cache_images.go:86] Images are preloaded, skipping loading
	I1213 08:57:35.044160    1308 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1213 08:57:35.044312    1308 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-482100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 08:57:35.047625    1308 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1213 08:57:35.491294    1308 command_runner.go:130] > cgroupfs
	I1213 08:57:35.491294    1308 cni.go:84] Creating CNI manager for ""
	I1213 08:57:35.491294    1308 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 08:57:35.491294    1308 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 08:57:35.491294    1308 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-482100 NodeName:functional-482100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 08:57:35.491294    1308 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-482100"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
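The rendered kubeadm config combines four documents: v1beta4 InitConfiguration and ClusterConfiguration plus a KubeletConfiguration and a KubeProxyConfiguration; it is staged as /var/tmp/minikube/kubeadm.yaml.new below. Recent kubeadm releases can lint such a file before it is used; a sketch, assuming a matching kubeadm binary is on PATH:

    # Check the staged config against kubeadm's API schema (recent kubeadm releases)
    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new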
	I1213 08:57:35.495479    1308 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 08:57:35.511680    1308 command_runner.go:130] > kubeadm
	I1213 08:57:35.511680    1308 command_runner.go:130] > kubectl
	I1213 08:57:35.511680    1308 command_runner.go:130] > kubelet
	I1213 08:57:35.511680    1308 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 08:57:35.515943    1308 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 08:57:35.527808    1308 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1213 08:57:35.545969    1308 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 08:57:35.565749    1308 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
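Three files land on the node here: the kubeadm drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, the base unit /lib/systemd/system/kubelet.service, and the staged /var/tmp/minikube/kubeadm.yaml.new. To see the unit exactly as systemd will merge it, base plus drop-ins:

    # Show the base kubelet unit together with all drop-ins
    systemctl cat kubelet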
	I1213 08:57:35.590269    1308 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 08:57:35.598806    1308 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 08:57:35.603098    1308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:57:35.752426    1308 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 08:57:35.771354    1308 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100 for IP: 192.168.49.2
	I1213 08:57:35.771354    1308 certs.go:195] generating shared ca certs ...
	I1213 08:57:35.771354    1308 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:57:35.771354    1308 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1213 08:57:35.772397    1308 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1213 08:57:35.772549    1308 certs.go:257] generating profile certs ...
	I1213 08:57:35.772794    1308 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\client.key
	I1213 08:57:35.772794    1308 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.key.13621831
	I1213 08:57:35.773396    1308 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.key
	I1213 08:57:35.773447    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 08:57:35.773539    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1213 08:57:35.773616    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 08:57:35.773761    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 08:57:35.773831    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 08:57:35.773939    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 08:57:35.773999    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 08:57:35.774105    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 08:57:35.774559    1308 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem (1338 bytes)
	W1213 08:57:35.774827    1308 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968_empty.pem, impossibly tiny 0 bytes
	I1213 08:57:35.774870    1308 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1213 08:57:35.775069    1308 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1213 08:57:35.775069    1308 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1213 08:57:35.775069    1308 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1213 08:57:35.775696    1308 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem (1708 bytes)
	I1213 08:57:35.775842    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:57:35.775842    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem -> /usr/share/ca-certificates/2968.pem
	I1213 08:57:35.775842    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> /usr/share/ca-certificates/29682.pem
	I1213 08:57:35.775842    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 08:57:35.807179    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 08:57:35.833688    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 08:57:35.863566    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 08:57:35.894920    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 08:57:35.921314    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 08:57:35.946004    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 08:57:35.973030    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 08:57:36.001405    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 08:57:36.027495    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem --> /usr/share/ca-certificates/2968.pem (1338 bytes)
	I1213 08:57:36.053673    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /usr/share/ca-certificates/29682.pem (1708 bytes)
	I1213 08:57:36.083163    1308 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 08:57:36.106205    1308 ssh_runner.go:195] Run: openssl version
	I1213 08:57:36.124518    1308 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 08:57:36.128653    1308 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2968.pem
	I1213 08:57:36.148109    1308 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2968.pem /etc/ssl/certs/2968.pem
	I1213 08:57:36.170644    1308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2968.pem
	I1213 08:57:36.179909    1308 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 08:48 /usr/share/ca-certificates/2968.pem
	I1213 08:57:36.179909    1308 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:48 /usr/share/ca-certificates/2968.pem
	I1213 08:57:36.184506    1308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2968.pem
	I1213 08:57:36.230303    1308 command_runner.go:130] > 51391683
	I1213 08:57:36.235418    1308 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
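The value printed by `openssl x509 -hash` (51391683 for 2968.pem) is OpenSSL's subject-name hash, and linking the certificate as <hash>.0 under /etc/ssl/certs is what makes it discoverable by OpenSSL's hashed-directory lookup; the `test -L` is the verification step. The full idiom in one place:

    # Install a certificate into OpenSSL's hashed trust directory and verify the link
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/2968.pem)
    sudo ln -fs /usr/share/ca-certificates/2968.pem "/etc/ssl/certs/${HASH}.0"
    sudo test -L "/etc/ssl/certs/${HASH}.0" && echo linked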
	I1213 08:57:36.252420    1308 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29682.pem
	I1213 08:57:36.271009    1308 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29682.pem /etc/ssl/certs/29682.pem
	I1213 08:57:36.291738    1308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29682.pem
	I1213 08:57:36.301002    1308 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 08:48 /usr/share/ca-certificates/29682.pem
	I1213 08:57:36.301002    1308 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:48 /usr/share/ca-certificates/29682.pem
	I1213 08:57:36.306035    1308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29682.pem
	I1213 08:57:36.348842    1308 command_runner.go:130] > 3ec20f2e
	I1213 08:57:36.353574    1308 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 08:57:36.371994    1308 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:57:36.390417    1308 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 08:57:36.409132    1308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:57:36.417987    1308 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 08:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:57:36.418020    1308 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:57:36.422336    1308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:57:36.464222    1308 command_runner.go:130] > b5213941
	I1213 08:57:36.469763    1308 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 08:57:36.486907    1308 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 08:57:36.493430    1308 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 08:57:36.493430    1308 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 08:57:36.493430    1308 command_runner.go:130] > Device: 8,48	Inode: 15294       Links: 1
	I1213 08:57:36.493430    1308 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 08:57:36.493430    1308 command_runner.go:130] > Access: 2025-12-13 08:53:22.558756963 +0000
	I1213 08:57:36.493430    1308 command_runner.go:130] > Modify: 2025-12-13 08:49:20.154446480 +0000
	I1213 08:57:36.493430    1308 command_runner.go:130] > Change: 2025-12-13 08:49:20.154446480 +0000
	I1213 08:57:36.493430    1308 command_runner.go:130] >  Birth: 2025-12-13 08:49:20.154446480 +0000
	I1213 08:57:36.498322    1308 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 08:57:36.542775    1308 command_runner.go:130] > Certificate will not expire
	I1213 08:57:36.547618    1308 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 08:57:36.590488    1308 command_runner.go:130] > Certificate will not expire
	I1213 08:57:36.594826    1308 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 08:57:36.640226    1308 command_runner.go:130] > Certificate will not expire
	I1213 08:57:36.644848    1308 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 08:57:36.698932    1308 command_runner.go:130] > Certificate will not expire
	I1213 08:57:36.703709    1308 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 08:57:36.746225    1308 command_runner.go:130] > Certificate will not expire
	I1213 08:57:36.751252    1308 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 08:57:36.796246    1308 command_runner.go:130] > Certificate will not expire
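`openssl x509 -checkend 86400` exits 0 (printing "Certificate will not expire") if the certificate is still valid 86,400 seconds, i.e. one day, from now, and exits non-zero otherwise, which is why minikube runs it per certificate instead of parsing dates. The same check scripts easily across a set of certs:

    # Flag any cert that expires within one day; paths taken from this run
    for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
               /var/lib/minikube/certs/etcd/server.crt \
               /var/lib/minikube/certs/front-proxy-client.crt; do
      openssl x509 -noout -in "$crt" -checkend 86400 >/dev/null \
        || echo "renew soon: $crt"
    done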
	I1213 08:57:36.796605    1308 kubeadm.go:401] StartCluster: {Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:57:36.800619    1308 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 08:57:36.835511    1308 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 08:57:36.848084    1308 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 08:57:36.848084    1308 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 08:57:36.848084    1308 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 08:57:36.848084    1308 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 08:57:36.848084    1308 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 08:57:36.853050    1308 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 08:57:36.866011    1308 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:57:36.869675    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:36.923417    1308 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-482100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:57:36.923684    1308 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "functional-482100" cluster setting kubeconfig missing "functional-482100" context setting]
	I1213 08:57:36.923684    1308 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
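	[editor's note] The two entries above show minikube detecting that the "functional-482100" cluster and context are missing from the kubeconfig and repairing the file under a write lock. A minimal client-go sketch of such a repair follows (path, server address, and CA file are taken from the log; the exact fields minikube writes are an assumption):

	// Sketch: re-add missing cluster/context entries to a kubeconfig
	// using client-go's clientcmd package. Illustrative only.
	package main

	import (
		"k8s.io/client-go/tools/clientcmd"
		clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
	)

	func main() {
		path := `C:\Users\jenkins.minikube4\minikube-integration\kubeconfig`
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			panic(err)
		}
		// Add the entries the verify step found missing, then write back.
		cfg.Clusters["functional-482100"] = &clientcmdapi.Cluster{
			Server:               "https://127.0.0.1:63845",
			CertificateAuthority: `C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt`,
		}
		cfg.Contexts["functional-482100"] = &clientcmdapi.Context{
			Cluster:  "functional-482100",
			AuthInfo: "functional-482100",
		}
		if err := clientcmd.WriteToFile(*cfg, path); err != nil {
			panic(err)
		}
	}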
	I1213 08:57:36.940090    1308 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:57:36.940688    1308 kapi.go:59] client config for functional-482100: &rest.Config{Host:"https://127.0.0.1:63845", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff744969080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 08:57:36.941864    1308 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 08:57:36.941864    1308 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 08:57:36.941864    1308 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 08:57:36.941864    1308 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 08:57:36.941864    1308 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 08:57:36.941864    1308 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 08:57:36.946352    1308 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 08:57:36.960987    1308 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1213 08:57:36.961998    1308 kubeadm.go:602] duration metric: took 113.913ms to restartPrimaryControlPlane
	I1213 08:57:36.961998    1308 kubeadm.go:403] duration metric: took 165.4668ms to StartCluster
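	[editor's note] restartPrimaryControlPlane above decided "does not require reconfiguration" after running `diff -u` on the deployed kubeadm.yaml and the freshly generated one. A sketch of that decision follows, under the assumption it reduces to comparing the two files (minikube actually shells out to diff, as logged):

	// Sketch: decide whether the control plane needs reconfiguration by
	// comparing the deployed kubeadm.yaml against the newly generated one.
	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	func needsReconfigure(current, generated string) (bool, error) {
		a, err := os.ReadFile(current)
		if err != nil {
			return true, err
		}
		b, err := os.ReadFile(generated)
		if err != nil {
			return true, err
		}
		return !bytes.Equal(a, b), nil
	}

	func main() {
		changed, err := needsReconfigure("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		fmt.Println("needs reconfiguration:", changed)
	}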
	I1213 08:57:36.961998    1308 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:57:36.961998    1308 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:57:36.963076    1308 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:57:36.963883    1308 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 08:57:36.963883    1308 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 08:57:36.963883    1308 addons.go:70] Setting default-storageclass=true in profile "functional-482100"
	I1213 08:57:36.963883    1308 addons.go:70] Setting storage-provisioner=true in profile "functional-482100"
	I1213 08:57:36.963883    1308 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 08:57:36.963883    1308 addons.go:239] Setting addon storage-provisioner=true in "functional-482100"
	I1213 08:57:36.963883    1308 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-482100"
	I1213 08:57:36.964406    1308 host.go:66] Checking if "functional-482100" exists ...
	I1213 08:57:36.966968    1308 out.go:179] * Verifying Kubernetes components...
	I1213 08:57:36.972864    1308 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
	I1213 08:57:36.972864    1308 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
	I1213 08:57:36.974067    1308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:57:37.028122    1308 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 08:57:37.032121    1308 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:37.032121    1308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 08:57:37.035128    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:37.050133    1308 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:57:37.050133    1308 kapi.go:59] client config for functional-482100: &rest.Config{Host:"https://127.0.0.1:63845", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff744969080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 08:57:37.051141    1308 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 08:57:37.051141    1308 addons.go:239] Setting addon default-storageclass=true in "functional-482100"
	I1213 08:57:37.051141    1308 host.go:66] Checking if "functional-482100" exists ...
	I1213 08:57:37.059130    1308 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
	I1213 08:57:37.090124    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:37.112122    1308 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:37.112122    1308 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 08:57:37.115122    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:37.124126    1308 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 08:57:37.163123    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:37.218965    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:37.244846    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:37.292847    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:37.297857    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:37.298846    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:37.298846    1308 retry.go:31] will retry after 278.997974ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
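	[editor's note] From here on, each failed `kubectl apply` is re-run after a growing delay (278ms, 421ms, 654ms, ... reaching 13s later in the log) while the apiserver on :8441 refuses connections. A generic sketch of that pattern follows; the exact backoff and jitter in retry.go are an assumption here:

	// Sketch of retry-with-growing-delay: roughly double the wait after
	// each failure until a deadline passes. Illustrative, not retry.go.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func retryWithBackoff(deadline time.Duration, fn func() error) error {
		delay := 250 * time.Millisecond
		start := time.Now()
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Since(start) > deadline {
				return fmt.Errorf("giving up: %w", err)
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2 // the delays logged above also carry random jitter
		}
	}

	func main() {
		attempts := 0
		_ = retryWithBackoff(5*time.Second, func() error {
			attempts++
			if attempts < 3 {
				return errors.New("connection refused")
			}
			return nil
		})
	}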
	I1213 08:57:37.298846    1308 node_ready.go:35] waiting up to 6m0s for node "functional-482100" to be "Ready" ...
	I1213 08:57:37.298846    1308 type.go:168] "Request Body" body=""
	I1213 08:57:37.298846    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:37.300855    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
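	[editor's note] The node_ready wait above polls GET /api/v1/nodes/functional-482100 and inspects the node's "Ready" condition. A bare net/http sketch of that check follows (TLS verification is disabled and the request is unauthenticated purely for illustration; the real client uses the certificates shown in the rest.Config above):

	// Sketch: fetch a node object and report its Ready condition.
	// InsecureSkipVerify and the hard-coded URL are illustrative only.
	package main

	import (
		"crypto/tls"
		"encoding/json"
		"fmt"
		"net/http"
	)

	type node struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}

	func main() {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://127.0.0.1:63845/api/v1/nodes/functional-482100")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		var n node
		if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
			panic(err)
		}
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" {
				fmt.Println("Ready:", c.Status)
			}
		}
	}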
	I1213 08:57:37.389624    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:37.394960    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:37.394960    1308 retry.go:31] will retry after 212.815514ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:37.583432    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:37.612508    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:37.662694    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:37.668089    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:37.668089    1308 retry.go:31] will retry after 421.785382ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:37.691227    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:37.696684    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:37.696684    1308 retry.go:31] will retry after 387.963958ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.090409    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:38.094708    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:38.167644    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:38.172931    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.172931    1308 retry.go:31] will retry after 654.783355ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.174195    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:38.178117    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.178179    1308 retry.go:31] will retry after 288.314182ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.301152    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:38.301683    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:38.304388    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
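	[editor's note] The with_retry.go entries show the client honoring the server's Retry-After response header, sleeping the advertised delay ("1s" here) before re-issuing the GET. A minimal sketch of parsing that header follows (seconds form only; HTTP also permits an HTTP-date, which this sketch ignores):

	// Sketch: honor a Retry-After header given in seconds before retrying.
	package main

	import (
		"fmt"
		"net/http"
		"strconv"
		"time"
	)

	func retryAfter(resp *http.Response) time.Duration {
		if v := resp.Header.Get("Retry-After"); v != "" {
			if secs, err := strconv.Atoi(v); err == nil && secs > 0 {
				return time.Duration(secs) * time.Second
			}
		}
		return 0 // no header, or the HTTP-date form
	}

	func main() {
		resp := &http.Response{Header: http.Header{"Retry-After": []string{"1"}}}
		if d := retryAfter(resp); d > 0 {
			fmt.Println("sleeping", d, "before retrying")
			time.Sleep(d)
		}
	}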
	I1213 08:57:38.472962    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:38.544996    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:38.548547    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.548623    1308 retry.go:31] will retry after 1.098701937s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.833272    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:38.912142    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:38.912142    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.912142    1308 retry.go:31] will retry after 808.399476ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:39.305249    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:39.305249    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:39.308473    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:39.652260    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:39.721531    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:39.726229    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 08:57:39.726899    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:39.726899    1308 retry.go:31] will retry after 1.580407023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:39.799856    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:39.802238    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:39.802238    1308 retry.go:31] will retry after 1.163449845s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:40.308791    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:40.308791    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:40.310792    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:40.971107    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:41.051235    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:41.056481    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:41.056595    1308 retry.go:31] will retry after 2.292483012s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:41.312219    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:41.312219    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:41.313763    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:41.315446    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:41.385280    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:41.389328    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:41.389328    1308 retry.go:31] will retry after 2.10655749s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:42.316064    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:42.316469    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:42.319430    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:43.319659    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:43.319659    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:43.322154    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:43.354119    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:43.424936    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:43.428566    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:43.428566    1308 retry.go:31] will retry after 2.451441131s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:43.500768    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:43.577861    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:43.581800    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:43.581870    1308 retry.go:31] will retry after 1.842575818s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:44.322393    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:44.322393    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:44.326064    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:45.326352    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:45.326352    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:45.329823    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:45.430441    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:45.504084    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:45.509721    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:45.509813    1308 retry.go:31] will retry after 3.320490506s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:45.885819    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:45.962560    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:45.966882    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:45.966882    1308 retry.go:31] will retry after 5.131341184s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:46.330362    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:46.330362    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:46.333170    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:47.333778    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:47.333778    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:47.337260    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1213 08:57:47.337260    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 08:57:47.337260    1308 type.go:168] "Request Body" body=""
	I1213 08:57:47.337260    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:47.340404    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:48.340937    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:48.340937    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:48.344443    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:48.835623    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:48.914169    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:48.918486    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:48.918486    1308 retry.go:31] will retry after 6.605490232s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:49.345162    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:49.345162    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:49.347526    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:50.348478    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:50.348478    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:50.351813    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:51.103982    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:51.174396    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:51.177073    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:51.177136    1308 retry.go:31] will retry after 4.217545245s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:51.352019    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:51.352363    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:51.354826    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:52.355908    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:52.355908    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:52.358993    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:53.359347    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:53.359730    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:53.362425    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:54.363245    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:54.363536    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:54.366267    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:55.367715    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:55.367715    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:55.371143    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:55.400351    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:55.476385    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:55.480063    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:55.480122    1308 retry.go:31] will retry after 11.422205159s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:55.528824    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:55.599872    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:55.604580    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:55.604626    1308 retry.go:31] will retry after 13.338795854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:56.371517    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:56.371517    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:56.375228    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
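
Each with_retry.go:234 line records client-go re-sending the GET after a server-suggested delay, counting attempts 1 through 10 at delay="1s". A stdlib-only sketch of honoring a Retry-After header with an attempt cap; getWithRetryAfter is a hypothetical name, and this is not client-go's actual code:

package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// getWithRetryAfter issues GETs, sleeping for the duration the server
// advertises in the Retry-After header, until a response without that
// header arrives or maxAttempts is exhausted.
func getWithRetryAfter(client *http.Client, url string, maxAttempts int) (*http.Response, error) {
	for attempt := 1; ; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			return nil, err
		}
		ra := resp.Header.Get("Retry-After")
		if ra == "" || attempt >= maxAttempts {
			return resp, nil // done, or attempt cap reached (the log stops at 10)
		}
		resp.Body.Close()
		secs, err := strconv.Atoi(ra)
		if err != nil || secs < 1 {
			secs = 1 // fall back to the 1s delay seen in the log
		}
		fmt.Printf("Got a Retry-After response: delay=%ds attempt=%d url=%s\n", secs, attempt, url)
		time.Sleep(time.Duration(secs) * time.Second)
	}
}

func main() {
	// Outside this test environment the Get simply returns a dial error.
	resp, err := getWithRetryAfter(http.DefaultClient, "https://127.0.0.1:63845/api/v1/nodes/functional-482100", 10)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
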
	I1213 08:57:57.375899    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:57.375899    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:57.378899    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1213 08:57:57.379427    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
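
The node_ready.go:55 warning marks what these GETs serve: fetch the node object and inspect its "Ready" condition, logging and retrying while the connection drops (the EOF here). A minimal stdlib sketch under simplifying assumptions — no client auth and TLS verification disabled, neither of which a real kubeconfig-backed client would omit; the URL and node name are taken from the log:

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// node covers only the fields the Ready check needs from the apiserver's
// node object.
type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// nodeReady fetches the node and reports whether its "Ready" condition is
// "True". Connection errors (like the EOFs in the log) go back to the
// caller, which logs them and retries.
func nodeReady(client *http.Client, url string) (bool, error) {
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var n node
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	// No bearer token or client certificate here; a real client built from
	// /var/lib/minikube/kubeconfig would supply both instead of skipping TLS checks.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	url := "https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	for attempt := 1; attempt <= 5; attempt++ {
		ok, err := nodeReady(client, url)
		if err != nil {
			fmt.Printf("error getting node condition \"Ready\" status (will retry): %v\n", err)
		} else if ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(10 * time.Second) // roughly the poll cadence in the log
	}
}
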
	I1213 08:57:57.379613    1308 type.go:168] "Request Body" body=""
	I1213 08:57:57.379640    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:57.381380    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1213 08:57:58.382025    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:58.382025    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:58.385451    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:59.385982    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:59.386304    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:59.388570    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:00.389156    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:00.389156    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:00.393493    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 08:58:01.394059    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:01.394059    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:01.397148    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:02.397228    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:02.397593    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:02.400363    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:03.400715    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:03.401100    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:03.403595    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:04.404146    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:04.404146    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:04.407029    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:05.407299    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:05.407299    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:05.409705    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:06.410552    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:06.410552    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:06.413575    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:06.907694    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:58:06.989453    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:58:06.993505    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:06.993505    1308 retry.go:31] will retry after 9.12046724s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
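
The ssh_runner.go:195 / command_runner.go:130 pair above runs a command in the node and reports its streams and exit status ("Process exited with status 1" plus the captured stdout/stderr). A simplified local stand-in using os/exec — minikube executes this over SSH inside the container, which this sketch does not attempt, and runCapture is a hypothetical name:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// runCapture runs a command, capturing stdout and stderr separately and
// surfacing a non-zero exit as an error, the way the log reports the exit
// status alongside both captured streams.
func runCapture(name string, args ...string) (string, string, error) {
	var stdout, stderr bytes.Buffer
	cmd := exec.Command(name, args...)
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	err := cmd.Run() // an *exec.ExitError carries the exit status on failure
	return stdout.String(), stderr.String(), err
}

func main() {
	out, errOut, err := runCapture("kubectl", "apply", "--force",
		"-f", "/etc/kubernetes/addons/storageclass.yaml")
	if err != nil {
		fmt.Printf("apply failed, will retry: %v\nstdout:\n%s\nstderr:\n%s\n", err, out, errOut)
	}
}
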
	I1213 08:58:07.413861    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:07.413861    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:07.423766    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=9
	W1213 08:58:07.423766    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 08:58:07.423766    1308 type.go:168] "Request Body" body=""
	I1213 08:58:07.423766    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:07.426420    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:08.426748    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:08.426748    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:08.429523    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:08.949269    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:58:09.021443    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:58:09.021574    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:09.021574    1308 retry.go:31] will retry after 18.212645226s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:09.429654    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:09.429654    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:09.434475    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 08:58:10.434763    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:10.434763    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:10.438337    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:11.438992    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:11.438992    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:11.442157    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:12.442370    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:12.442370    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:12.445441    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:13.446557    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:13.446557    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:13.449579    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:14.449909    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:14.449909    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:14.453875    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:15.453999    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:15.454347    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:15.457109    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:16.119722    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:58:16.199861    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:58:16.203796    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:16.203841    1308 retry.go:31] will retry after 32.127892546s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:16.457492    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:16.457492    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:16.460671    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:17.461098    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:17.461098    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:17.464303    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1213 08:58:17.464392    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 08:58:17.464557    1308 type.go:168] "Request Body" body=""
	I1213 08:58:17.464596    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:17.466792    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:18.467178    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:18.467178    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:18.471411    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 08:58:19.472813    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:19.472813    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:19.475365    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:20.475825    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:20.475825    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:20.478756    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:21.479284    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:21.479284    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:21.482725    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:22.483047    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:22.483047    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:22.486928    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:23.487680    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:23.487680    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:23.491133    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:24.491850    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:24.492121    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:24.495131    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:25.495436    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:25.495893    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:25.498242    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:26.498882    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:26.498882    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:26.501986    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:27.239685    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:58:27.315134    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:58:27.318446    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:27.318446    1308 retry.go:31] will retry after 22.292291086s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:27.502907    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:27.502907    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:27.505700    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1213 08:58:27.505700    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 08:58:27.505700    1308 type.go:168] "Request Body" body=""
	I1213 08:58:27.505700    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:27.508521    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:28.509510    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:28.509510    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:28.512707    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:29.513169    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:29.513169    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:29.516081    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:30.517601    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:30.517601    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:30.520368    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:31.520700    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:31.521119    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:31.524120    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:32.524848    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:32.524848    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:32.528137    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:33.529023    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:33.529412    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:33.532996    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:34.533392    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:34.533697    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:34.536406    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:35.536910    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:35.536910    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:35.539801    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:36.540290    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:36.540290    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:36.543462    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:37.544092    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:37.544398    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:37.547080    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1213 08:58:37.547165    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 08:58:37.547240    1308 type.go:168] "Request Body" body=""
	I1213 08:58:37.547322    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:37.549686    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:38.550568    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:38.550568    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:38.554061    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:39.554545    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:39.554545    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:39.556910    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:40.557343    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:40.557343    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:40.562456    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1213 08:58:41.563271    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:41.563271    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:41.566401    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:42.566676    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:42.566676    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:42.569495    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:43.570436    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:43.570436    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:43.573856    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:44.574034    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:44.574034    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:44.576971    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:45.577736    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:45.577736    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:45.580563    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:46.580998    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:46.580998    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:46.584404    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:47.585574    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:47.585574    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:47.589116    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1213 08:58:47.589116    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 08:58:47.589285    1308 type.go:168] "Request Body" body=""
	I1213 08:58:47.589330    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:47.591421    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:48.337063    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:58:48.419155    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:58:48.419236    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:48.419312    1308 retry.go:31] will retry after 42.344315794s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:48.592137    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:48.592503    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:48.594564    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:49.594849    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:49.594849    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:49.598177    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:49.616306    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:58:49.690748    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:58:49.696226    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:49.696226    1308 retry.go:31] will retry after 43.889805704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:50.598940    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:50.598940    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:50.602650    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:51.602781    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:51.602781    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:51.606654    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:52.607136    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:52.607136    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:52.610410    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:53.610695    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:53.611291    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:53.614086    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:54.614262    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:54.614262    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:54.617596    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:55.618389    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:55.618389    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:55.621130    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:56.621484    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:56.621936    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:56.626456    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 08:58:57.626653    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:57.626653    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:57.630131    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1213 08:58:57.630131    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 08:58:57.630323    1308 type.go:168] "Request Body" body=""
	I1213 08:58:57.630411    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:57.632861    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:58.633441    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:58.634089    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:58.637246    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:59.637793    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:59.638147    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:59.641409    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:00.641531    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:00.641871    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:00.644335    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:59:01.644762    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:01.644762    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:01.647872    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:02.648069    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:02.648069    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:02.651180    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:03.651302    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:03.651302    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:03.654332    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:04.654665    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:04.654665    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:04.657952    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:05.658178    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:05.658178    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:05.662672    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 08:59:06.663347    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:06.663347    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:06.666728    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:07.667532    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:07.667885    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:07.670688    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1213 08:59:07.670852    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[... 158 lines of near-identical retry output elided: the GET above is reissued every ~1s (with_retry attempts 1-10 per cycle), each returning an empty response in 2-4 ms, and each completed cycle ends with the same node_ready.go:55 EOF warning, logged again at 08:59:17 and 08:59:27 ...]
	I1213 08:59:30.770278    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:59:31.058703    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:59:31.062891    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:59:31.062891    1308 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
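	[editor note: this addon failure (and the storage-provisioner one below) has the same root cause as the node polling above — kubectl's client-side validation needs the apiserver's OpenAPI document, and localhost:8441 refuses the connection. The error text itself names the escape hatch; an illustrative retry with validation disabled would look like the following, though skipping schema validation only moves the failure point and the apply would still fail until the apiserver is reachable:
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false -f /etc/kubernetes/addons/storageclass.yaml]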
	[... retry attempts 4-5 (08:59:31-08:59:32) elided: same GET, same empty responses ...]
	I1213 08:59:33.593527    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:59:33.670412    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:59:33.677065    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:59:33.677065    1308 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 08:59:33.680151    1308 out.go:179] * Enabled addons: 
	I1213 08:59:33.683381    1308 addons.go:530] duration metric: took 1m56.7187029s for enable addons: enabled=[]
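	[editor note: "enabled=[]" after 1m56s means minikube gave up enabling any addon for this profile. Once the apiserver is reachable, the addon state can be inspected and the defaults re-applied manually (illustrative commands, using the profile name from this log):
	    minikube -p functional-482100 addons list
	    minikube -p functional-482100 addons enable storage-provisioner]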
	[... 457 lines of near-identical retry output elided: the same 1s GET/empty-response cycle against https://127.0.0.1:63845/api/v1/nodes/functional-482100 continues from 08:59:33 through 09:00:41, with the node_ready.go:55 EOF warning repeating at 08:59:37, 08:59:47, 08:59:57, 09:00:07, 09:00:17, 09:00:27 and 09:00:38 ...]
	I1213 09:00:42.053226    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:42.053226    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:42.056222    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:00:43.056546    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:43.056546    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:43.059398    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:00:44.059625    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:44.059625    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:44.062923    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:00:45.063384    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:45.063384    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:45.066631    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:00:46.067306    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:46.067306    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:46.070443    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:00:47.070777    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:47.070777    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:47.073795    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:00:48.074558    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:48.074558    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:48.077853    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1213 09:00:48.077917    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 09:00:48.078016    1308 type.go:168] "Request Body" body=""
	I1213 09:00:48.078098    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:48.080934    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:00:49.082070    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:49.082070    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:49.084982    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:00:50.085640    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:50.085640    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:50.088925    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:00:51.089700    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:51.089700    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:51.092744    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:00:52.093791    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:52.093791    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:52.096573    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:00:53.097781    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:53.097781    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:53.100957    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:00:54.101759    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:54.101759    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:54.104615    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:00:55.105494    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:55.105919    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:55.109444    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:00:56.110146    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:56.110146    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:56.114930    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 09:00:57.115147    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:57.115467    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:57.118438    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:00:58.119483    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:58.119483    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:58.122648    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1213 09:00:58.122648    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 09:00:58.122648    1308 type.go:168] "Request Body" body=""
	I1213 09:00:58.123185    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:58.125195    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:00:59.125875    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:00:59.125875    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:00:59.129393    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:00.129668    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:00.129668    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:00.132627    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:01.133033    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:01.133525    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:01.136658    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:02.137163    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:02.137163    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:02.140403    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:03.140588    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:03.140588    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:03.143578    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:04.144312    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:04.144312    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:04.147391    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:05.148065    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:05.148453    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:05.152235    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:06.152555    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:06.152555    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:06.155862    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:07.156337    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:07.156337    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:07.159561    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:08.160007    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:08.160007    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:08.163399    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1213 09:01:08.163399    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 09:01:08.163399    1308 type.go:168] "Request Body" body=""
	I1213 09:01:08.163399    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:08.165301    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1213 09:01:09.166036    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:09.166036    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:09.169312    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:10.170153    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:10.170153    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:10.173337    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:11.173766    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:11.173766    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:11.176583    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:12.177289    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:12.177289    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:12.180992    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:13.181441    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:13.181441    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:13.183966    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:14.185028    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:14.185028    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:14.189060    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 09:01:15.189819    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:15.190274    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:15.193013    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:16.193531    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:16.193531    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:16.196639    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:17.197877    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:17.197877    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:17.201511    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:18.201776    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:18.201776    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:18.204748    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1213 09:01:18.204825    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 09:01:18.204913    1308 type.go:168] "Request Body" body=""
	I1213 09:01:18.204983    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:18.206713    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1213 09:01:19.207179    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:19.207179    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:19.210389    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:20.210678    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:20.210678    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:20.213343    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:21.213955    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:21.214383    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:21.217244    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:22.217764    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:22.217764    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:22.221016    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:23.221538    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:23.222082    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:23.225141    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:24.225563    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:24.225563    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:24.228842    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:25.229501    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:25.229896    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:25.232481    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:26.232855    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:26.232855    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:26.235225    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:27.235999    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:27.235999    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:27.239007    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:28.239290    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:28.239796    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:28.242163    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1213 09:01:28.242163    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 09:01:28.242754    1308 type.go:168] "Request Body" body=""
	I1213 09:01:28.242754    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:28.245406    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:29.246227    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:29.246227    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:29.249049    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:30.249528    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:30.249528    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:30.252945    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:31.253720    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:31.253720    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:31.257007    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:32.257727    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:32.257727    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:32.260807    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:33.261355    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:33.261355    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:33.264412    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:34.265479    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:34.265479    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:34.268382    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:35.269039    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:35.269258    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:35.271838    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:36.272075    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:36.272075    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:36.275197    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:37.275934    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:37.275934    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:37.280528    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 09:01:38.281387    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:38.281707    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:38.284450    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1213 09:01:38.284566    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 09:01:38.284566    1308 type.go:168] "Request Body" body=""
	I1213 09:01:38.284566    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:38.287277    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:39.287457    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:39.287457    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:39.290889    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:40.291630    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:40.291630    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:40.295337    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:41.295926    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:41.296353    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:41.299053    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:42.300178    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:42.300178    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:42.303160    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:43.304403    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:43.305041    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:43.309194    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 09:01:44.310087    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:44.310087    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:44.312799    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:45.313738    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:45.313738    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:45.317911    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 09:01:46.319411    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:46.319411    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:46.323036    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:47.323495    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:47.323495    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:47.326782    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:48.327222    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:48.327222    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:48.331951    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1213 09:01:48.331951    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 09:01:48.331951    1308 type.go:168] "Request Body" body=""
	I1213 09:01:48.331951    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:48.336553    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 09:01:49.337686    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:49.337686    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:49.340983    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:50.342115    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:50.342115    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:50.344717    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:51.345242    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:51.345242    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:51.347895    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:52.348829    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:52.348829    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:52.353265    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 09:01:53.353621    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:53.353621    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:53.356851    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:54.357643    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:54.357643    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:54.360716    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:55.361583    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:55.361583    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:55.364202    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:56.364951    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:56.364951    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:56.368507    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:01:57.368791    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:57.368791    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:57.373234    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 09:01:58.373801    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:58.373801    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:58.376426    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1213 09:01:58.376426    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 09:01:58.376426    1308 type.go:168] "Request Body" body=""
	I1213 09:01:58.377111    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:58.379740    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:59.379930    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:59.380415    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:59.383047    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:02:00.384221    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:00.384221    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:00.387516    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:02:01.388029    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:01.388029    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:01.392383    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 09:02:02.392602    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:02.392956    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:02.396482    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:02:03.397017    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:03.397017    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:03.400427    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:02:04.400756    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:04.400756    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:04.404303    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:02:05.404720    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:05.404720    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:05.408936    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 09:02:06.409154    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:06.409154    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:06.412227    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:02:07.412599    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:07.412599    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:07.415247    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:02:08.415920    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:08.415920    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:08.419260    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1213 09:02:08.419342    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[... the identical one-second GET /api/v1/nodes/functional-482100 cycle (with_retry attempts 1-10, each request answered in 1-6 ms with an empty status) repeats for eight more rounds, logging the same node_ready.go EOF warning at 09:02:18, 09:02:28, 09:02:38, 09:02:48, 09:02:58, 09:03:08, 09:03:18 and 09:03:28, before the final round is cut off after attempt 8 at 09:03:36 ...]
	W1213 09:03:37.302575    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1213 09:03:37.302575    1308 node_ready.go:38] duration metric: took 6m0.0011646s for node "functional-482100" to be "Ready" ...
	I1213 09:03:37.305847    1308 out.go:203] 
	W1213 09:03:37.307851    1308 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 09:03:37.307851    1308 out.go:285] * 
	W1213 09:03:37.311623    1308 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 09:03:37.314310    1308 out.go:203] 
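
The loop that just timed out is the standard client-go polling pattern: GET the node once a second until its Ready condition reports True, or give up when the overall deadline passes. A minimal, self-contained sketch of that pattern follows (an illustration only, not minikube's actual node_ready.go; the kubeconfig path and node name are taken from the log above):

// node_ready_sketch.go - poll a node's Ready condition, as the log above does.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every second for up to 6 minutes, mirroring "wait 6m0s for node" above.
	err = wait.PollUntilContextTimeout(context.Background(), time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "functional-482100", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient errors (the EOFs above) just trigger another poll
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		// This branch corresponds to "WaitNodeCondition: context deadline exceeded".
		fmt.Println("node never became Ready:", err)
	}
}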
	
	
	==> Docker <==
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.525747623Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.525754023Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.525775925Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.525849730Z" level=info msg="Initializing buildkit"
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.646190196Z" level=info msg="Completed buildkit initialization"
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.655073529Z" level=info msg="Daemon has completed initialization"
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.655186237Z" level=info msg="API listen on /run/docker.sock"
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.655229540Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.655448956Z" level=info msg="API listen on [::]:2376"
	Dec 13 08:57:33 functional-482100 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 13 08:57:33 functional-482100 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 08:57:33 functional-482100 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 13 08:57:33 functional-482100 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 13 08:57:34 functional-482100 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Start docker client with request timeout 0s"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Loaded network plugin cni"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 13 08:57:34 functional-482100 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:05:47.487469   20111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:05:47.489096   20111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:05:47.490235   20111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:05:47.491274   20111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:05:47.491956   20111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000739] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000891] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001020] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001158] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001174] FS:  0000000000000000 GS:  0000000000000000
	[Dec13 08:57] CPU: 3 PID: 54870 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000964] RIP: 0033:0x7f5dc4ba4b20
	[  +0.000410] Code: Unable to access opcode bytes at RIP 0x7f5dc4ba4af6.
	[  +0.000689] RSP: 002b:00007ffdbe9599f0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000820] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000875] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001112] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001539] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001199] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001222] FS:  0000000000000000 GS:  0000000000000000
	[  +0.961990] CPU: 3 PID: 54996 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000796] RIP: 0033:0x7f46e6061b20
	[  +0.000388] Code: Unable to access opcode bytes at RIP 0x7f46e6061af6.
	[  +0.000654] RSP: 002b:00007ffd6f1408e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000776] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000787] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001010] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001229] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001341] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001210] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 09:05:47 up 41 min,  0 user,  load average: 0.32, 0.36, 0.56
	Linux functional-482100 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 09:05:44 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:05:45 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 988.
	Dec 13 09:05:45 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:05:45 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:05:45 functional-482100 kubelet[19957]: E1213 09:05:45.288921   19957 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:05:45 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:05:45 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:05:45 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 989.
	Dec 13 09:05:45 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:05:45 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:05:45 functional-482100 kubelet[19982]: E1213 09:05:45.995949   19982 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:05:45 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:05:45 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:05:46 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 990.
	Dec 13 09:05:46 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:05:46 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:05:46 functional-482100 kubelet[20011]: E1213 09:05:46.785851   20011 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:05:46 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:05:46 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:05:47 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 991.
	Dec 13 09:05:47 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:05:47 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:05:47 functional-482100 kubelet[20120]: E1213 09:05:47.546782   20120 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:05:47 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:05:47 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
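The retry loop and the describe-nodes failure in the log above point at the same symptom: nothing ever comes up on the apiserver port. kubectl inside the node targets localhost:8441 and gets connection refused, while the host-side client talking to the forwarded 127.0.0.1:63845 sees EOF. One way to confirm that no process is listening on 8441 inside the node (a hedged sketch using standard minikube and iproute2 tooling; the profile name is taken from the log):

	# Run a command in the node and list TCP listeners on 8441;
	# an empty result means the apiserver never bound the port
	minikube ssh -p functional-482100 -- "sudo ss -ltnp | grep 8441"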
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-482100 -n functional-482100
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-482100 -n functional-482100: exit status 2 (600.313ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-482100" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (54.30s)
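Every kubelet restart in the log above dies in configuration validation: the node is on a cgroup v1 hierarchy (the Docker daemon's own deprecation warning near the top of the log says as much), and this kubelet build refuses to start on v1. Assuming the host is Docker Desktop on WSL2 as the report indicates, the cgroup mode can be checked from both sides with standard commands (a minimal sketch; only the profile name is taken from the log):

	# What the Docker daemon reports as its cgroup version (prints 1 or 2)
	docker info --format '{{.CgroupVersion}}'

	# Filesystem type backing /sys/fs/cgroup inside the minikube node:
	# "cgroup2fs" means cgroup v2, "tmpfs" means a v1 hierarchy
	minikube ssh -p functional-482100 -- stat -fc %T /sys/fs/cgroup/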

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (53.66s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out\kubectl.exe --context functional-482100 get pods
functional_test.go:756: (dbg) Non-zero exit: out\kubectl.exe --context functional-482100 get pods: exit status 1 (50.5341476s)

                                                
                                                
** stderr ** 
	E1213 09:05:59.253769    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:06:09.350017    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:06:19.393352    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:06:29.440993    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:06:39.484333    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out\\kubectl.exe --context functional-482100 get pods": exit status 1
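A bare "Unable to connect to the server: EOF" does not say whether the port forward or the apiserver dropped the connection. Re-running with client-side request tracing, and cross-checking minikube's own view of the component states, usually separates the two (a sketch with standard kubectl/minikube flags; the context and profile names are taken from the log):

	# -v=8 logs every HTTP request/response, including transport-level errors
	kubectl --context functional-482100 get pods -v=8

	# minikube's view: host, kubelet, and apiserver states for the profile
	minikube status -p functional-482100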
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-482100
helpers_test.go:244: (dbg) docker inspect functional-482100:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa",
	        "Created": "2025-12-13T08:49:07.27080474Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43282,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T08:49:07.556748749Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/hostname",
	        "HostsPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/hosts",
	        "LogPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa-json.log",
	        "Name": "/functional-482100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-482100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-482100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91-init/diff:/var/lib/docker/overlay2/429aa299c6fcdb1695d08ec7c893c57c033afffcd3ec41fc904bf3236db5abde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-482100",
	                "Source": "/var/lib/docker/volumes/functional-482100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-482100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-482100",
	                "name.minikube.sigs.k8s.io": "functional-482100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0846ee7b9ca8cb54809a7d685cd1bf9a4ebcad80c4fa7d3ad64c01e27d0c8bc4",
	            "SandboxKey": "/var/run/docker/netns/0846ee7b9ca8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63841"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63842"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63844"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63845"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-482100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "88ce21d6cbdebdf878313475255fe0fbc85957ab9cf1fa33630b61bbbfd2061c",
	                    "EndpointID": "88d9584a7fae8c35f7938fb422a7bed2f8ec5a3db15bd02c0d2459ed9f8f0e4d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-482100",
	                        "688ac19b4403"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
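The inspect output confirms the wiring the tests rely on: container port 8441/tcp (the apiserver) is published on 127.0.0.1:63845, matching the URL in the kubectl errors above. The mapping can be pulled out with the same Go-template pattern minikube itself uses for the SSH port in the "Last Start" log below; only the port key changes (a sketch, not a command taken verbatim from the report):

	# Host port that the node's 8441/tcp (apiserver) is bound to
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-482100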
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-482100 -n functional-482100
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-482100 -n functional-482100: exit status 2 (618.9952ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-482100 logs -n 25: (1.1683523s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                          ARGS                                                           │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-213400 image ls --format short --alsologtostderr                                                             │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ image   │ functional-213400 image ls --format yaml --alsologtostderr                                                              │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ ssh     │ functional-213400 ssh pgrep buildkitd                                                                                   │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │                     │
	│ image   │ functional-213400 image ls --format json --alsologtostderr                                                              │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ image   │ functional-213400 image build -t localhost/my-image:functional-213400 testdata\build --alsologtostderr                  │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ image   │ functional-213400 image ls --format table --alsologtostderr                                                             │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ image   │ functional-213400 image ls                                                                                              │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ delete  │ -p functional-213400                                                                                                    │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:48 UTC │ 13 Dec 25 08:48 UTC │
	│ start   │ -p functional-482100 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:48 UTC │                     │
	│ start   │ -p functional-482100 --alsologtostderr -v=8                                                                             │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:57 UTC │                     │
	│ cache   │ functional-482100 cache add registry.k8s.io/pause:3.1                                                                   │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ functional-482100 cache add registry.k8s.io/pause:3.3                                                                   │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ functional-482100 cache add registry.k8s.io/pause:latest                                                                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ functional-482100 cache add minikube-local-cache-test:functional-482100                                                 │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ functional-482100 cache delete minikube-local-cache-test:functional-482100                                              │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ list                                                                                                                    │ minikube          │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ ssh     │ functional-482100 ssh sudo crictl images                                                                                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ ssh     │ functional-482100 ssh sudo docker rmi registry.k8s.io/pause:latest                                                      │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ ssh     │ functional-482100 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │                     │
	│ cache   │ functional-482100 cache reload                                                                                          │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ ssh     │ functional-482100 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                     │ minikube          │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ kubectl │ functional-482100 kubectl -- --context functional-482100 get pods                                                       │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:57:27
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 08:57:27.379293    1308 out.go:360] Setting OutFile to fd 1960 ...
	I1213 08:57:27.421775    1308 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:57:27.421775    1308 out.go:374] Setting ErrFile to fd 2020...
	I1213 08:57:27.421858    1308 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:57:27.434678    1308 out.go:368] Setting JSON to false
	I1213 08:57:27.436793    1308 start.go:133] hostinfo: {"hostname":"minikube4","uptime":2054,"bootTime":1765614192,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 08:57:27.436793    1308 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 08:57:27.440227    1308 out.go:179] * [functional-482100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 08:57:27.444177    1308 notify.go:221] Checking for updates...
	I1213 08:57:27.444177    1308 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:57:27.446958    1308 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:57:27.448893    1308 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 08:57:27.451179    1308 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:57:27.453000    1308 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:57:27.455340    1308 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 08:57:27.456010    1308 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:57:27.677552    1308 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 08:57:27.681550    1308 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:57:27.918123    1308 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-13 08:57:27.897746454 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 08:57:27.922386    1308 out.go:179] * Using the docker driver based on existing profile
	I1213 08:57:27.925483    1308 start.go:309] selected driver: docker
	I1213 08:57:27.925483    1308 start.go:927] validating driver "docker" against &{Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:57:27.925483    1308 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:57:27.931484    1308 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:57:28.158174    1308 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-13 08:57:28.141185883 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 08:57:28.238865    1308 cni.go:84] Creating CNI manager for ""
	I1213 08:57:28.238865    1308 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 08:57:28.239498    1308 start.go:353] cluster config:
	{Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:57:28.243527    1308 out.go:179] * Starting "functional-482100" primary control-plane node in "functional-482100" cluster
	I1213 08:57:28.245818    1308 cache.go:134] Beginning downloading kic base image for docker with docker
	I1213 08:57:28.247303    1308 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 08:57:28.251374    1308 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 08:57:28.251465    1308 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 08:57:28.251634    1308 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1213 08:57:28.251673    1308 cache.go:65] Caching tarball of preloaded images
	I1213 08:57:28.251673    1308 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 08:57:28.251673    1308 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1213 08:57:28.251673    1308 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\config.json ...
	I1213 08:57:28.331506    1308 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 08:57:28.331506    1308 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 08:57:28.331506    1308 cache.go:243] Successfully downloaded all kic artifacts
	I1213 08:57:28.331506    1308 start.go:360] acquireMachinesLock for functional-482100: {Name:mkdbad0c5d0c221588a4a9490c5c0730668b0a50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 08:57:28.331506    1308 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-482100"
	I1213 08:57:28.331506    1308 start.go:96] Skipping create...Using existing machine configuration
	I1213 08:57:28.331506    1308 fix.go:54] fixHost starting: 
	I1213 08:57:28.338850    1308 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
	I1213 08:57:28.394405    1308 fix.go:112] recreateIfNeeded on functional-482100: state=Running err=<nil>
	W1213 08:57:28.394453    1308 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 08:57:28.397828    1308 out.go:252] * Updating the running docker "functional-482100" container ...
	I1213 08:57:28.397828    1308 machine.go:94] provisionDockerMachine start ...
	I1213 08:57:28.401414    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:28.456355    1308 main.go:143] libmachine: Using SSH client type: native
	I1213 08:57:28.457085    1308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:57:28.457134    1308 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 08:57:28.656820    1308 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-482100
	
	I1213 08:57:28.656820    1308 ubuntu.go:182] provisioning hostname "functional-482100"
	I1213 08:57:28.660505    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:28.713653    1308 main.go:143] libmachine: Using SSH client type: native
	I1213 08:57:28.714127    1308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:57:28.714127    1308 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-482100 && echo "functional-482100" | sudo tee /etc/hostname
	I1213 08:57:28.912851    1308 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-482100
	
	I1213 08:57:28.916558    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:28.972916    1308 main.go:143] libmachine: Using SSH client type: native
	I1213 08:57:28.973035    1308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:57:28.973035    1308 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-482100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-482100/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-482100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 08:57:29.158720    1308 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 08:57:29.158720    1308 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1213 08:57:29.158720    1308 ubuntu.go:190] setting up certificates
	I1213 08:57:29.158720    1308 provision.go:84] configureAuth start
	I1213 08:57:29.162705    1308 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-482100
	I1213 08:57:29.217525    1308 provision.go:143] copyHostCerts
	I1213 08:57:29.217525    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1213 08:57:29.217525    1308 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1213 08:57:29.217525    1308 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1213 08:57:29.218193    1308 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1213 08:57:29.218931    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1213 08:57:29.219078    1308 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1213 08:57:29.219114    1308 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1213 08:57:29.219299    1308 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1213 08:57:29.220064    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1213 08:57:29.220064    1308 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1213 08:57:29.220064    1308 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1213 08:57:29.220064    1308 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1213 08:57:29.220972    1308 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-482100 san=[127.0.0.1 192.168.49.2 functional-482100 localhost minikube]
	I1213 08:57:29.312824    1308 provision.go:177] copyRemoteCerts
	I1213 08:57:29.317163    1308 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 08:57:29.320164    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:29.370164    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
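The `docker container inspect -f` template above is how every SSH hop in this log finds the randomly published host port: the Go template indexes NetworkSettings.Ports by the container port ("22/tcp"), takes the first binding, and reads its HostPort. Reproducible on any Docker host (container name taken from this run):

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  functional-482100
	# prints the published port, 63841 in this run, which sshutil then
	# dials on 127.0.0.1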
	I1213 08:57:29.504512    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1213 08:57:29.504655    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 08:57:29.542721    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1213 08:57:29.542721    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 08:57:29.574672    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1213 08:57:29.574672    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 08:57:29.604045    1308 provision.go:87] duration metric: took 445.3221ms to configureAuth
	I1213 08:57:29.604045    1308 ubuntu.go:206] setting minikube options for container-runtime
	I1213 08:57:29.605053    1308 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 08:57:29.610417    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:29.666069    1308 main.go:143] libmachine: Using SSH client type: native
	I1213 08:57:29.666532    1308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:57:29.666532    1308 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 08:57:29.836610    1308 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1213 08:57:29.836610    1308 ubuntu.go:71] root file system type: overlay
	I1213 08:57:29.836610    1308 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 08:57:29.840760    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:29.894590    1308 main.go:143] libmachine: Using SSH client type: native
	I1213 08:57:29.895592    1308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:57:29.895592    1308 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 08:57:30.101134    1308 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 08:57:30.105760    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:30.161736    1308 main.go:143] libmachine: Using SSH client type: native
	I1213 08:57:30.162318    1308 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff7429cfd00] 0x7ff7429d2860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 08:57:30.162318    1308 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 08:57:30.345094    1308 main.go:143] libmachine: SSH cmd err, output: <nil>: 
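The one-liner above is a compare-and-swap install: the freshly rendered unit replaces /lib/systemd/system/docker.service, and docker is restarted, only when the two files actually differ. Expanded for readability:

	if ! sudo diff -u /lib/systemd/system/docker.service \
	              /lib/systemd/system/docker.service.new; then
	  # Files differ: swap in the new unit and bounce the daemon.
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl -f daemon-reload
	  sudo systemctl -f enable docker
	  sudo systemctl -f restart docker
	fi

In this run diff printed nothing, so the restart branch did not fire.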
	I1213 08:57:30.345094    1308 machine.go:97] duration metric: took 1.947253s to provisionDockerMachine
	I1213 08:57:30.345094    1308 start.go:293] postStartSetup for "functional-482100" (driver="docker")
	I1213 08:57:30.345094    1308 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 08:57:30.349348    1308 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 08:57:30.352292    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:30.407399    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:30.537367    1308 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 08:57:30.545885    1308 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 08:57:30.545957    1308 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 08:57:30.545957    1308 command_runner.go:130] > VERSION_ID="12"
	I1213 08:57:30.545957    1308 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 08:57:30.545957    1308 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 08:57:30.545957    1308 command_runner.go:130] > ID=debian
	I1213 08:57:30.545957    1308 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 08:57:30.545957    1308 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 08:57:30.545957    1308 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 08:57:30.546095    1308 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 08:57:30.546117    1308 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 08:57:30.546141    1308 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1213 08:57:30.546161    1308 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1213 08:57:30.546880    1308 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> 29682.pem in /etc/ssl/certs
	I1213 08:57:30.546880    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> /etc/ssl/certs/29682.pem
	I1213 08:57:30.547539    1308 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\2968\hosts -> hosts in /etc/test/nested/copy/2968
	I1213 08:57:30.547539    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\2968\hosts -> /etc/test/nested/copy/2968/hosts
	I1213 08:57:30.551732    1308 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2968
	I1213 08:57:30.565806    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /etc/ssl/certs/29682.pem (1708 bytes)
	I1213 08:57:30.596092    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\2968\hosts --> /etc/test/nested/copy/2968/hosts (40 bytes)
	I1213 08:57:30.624821    1308 start.go:296] duration metric: took 279.7253ms for postStartSetup
	I1213 08:57:30.629883    1308 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 08:57:30.633087    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:30.686590    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:30.807695    1308 command_runner.go:130] > 1%
	I1213 08:57:30.812335    1308 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 08:57:30.820851    1308 command_runner.go:130] > 950G
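The two probes above size up /var before continuing: percent used from `df -h`, then free space in gigabytes from `df -BG`. Row 2 of df's output is the filesystem line, and $5/$4 are the use% and available columns:

	df -h  /var | awk 'NR==2{print $5}'   # use%      -> 1% in this run
	df -BG /var | awk 'NR==2{print $4}'   # available -> 950G in this run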
	I1213 08:57:30.820851    1308 fix.go:56] duration metric: took 2.4893282s for fixHost
	I1213 08:57:30.820851    1308 start.go:83] releasing machines lock for "functional-482100", held for 2.4893282s
	I1213 08:57:30.824237    1308 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-482100
	I1213 08:57:30.876765    1308 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1213 08:57:30.881324    1308 ssh_runner.go:195] Run: cat /version.json
	I1213 08:57:30.881371    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:30.884518    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:30.935914    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:30.935914    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:31.066730    1308 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1213 08:57:31.066730    1308 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1213 08:57:31.066730    1308 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 08:57:31.071708    1308 ssh_runner.go:195] Run: systemctl --version
	I1213 08:57:31.084553    1308 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 08:57:31.084640    1308 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 08:57:31.090087    1308 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 08:57:31.099561    1308 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 08:57:31.100565    1308 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 08:57:31.105214    1308 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 08:57:31.124077    1308 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 08:57:31.124077    1308 start.go:496] detecting cgroup driver to use...
	I1213 08:57:31.124077    1308 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 08:57:31.124648    1308 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 08:57:31.147852    1308 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1213 08:57:31.152021    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 08:57:31.174172    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1213 08:57:31.176576    1308 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1213 08:57:31.176576    1308 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1213 08:57:31.189695    1308 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 08:57:31.194128    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 08:57:31.213650    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:57:31.232544    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 08:57:31.252203    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:57:31.274175    1308 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 08:57:31.296706    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 08:57:31.315777    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 08:57:31.334664    1308 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 08:57:31.355488    1308 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 08:57:31.369376    1308 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 08:57:31.373398    1308 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 08:57:31.391830    1308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:57:31.608372    1308 ssh_runner.go:195] Run: sudo systemctl restart containerd
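All of the containerd tweaks above are in-place sed edits of /etc/containerd/config.toml followed by a daemon reload; the two that matter most for this run, condensed into a sketch (image and driver values copied from the log):

	CFG=/etc/containerd/config.toml
	# Pin the sandbox (pause) image kubeadm expects.
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' "$CFG"
	# Match the cgroupfs driver detected on the host.
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
	sudo systemctl daemon-reload && sudo systemctl restart containerd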
	I1213 08:57:31.906123    1308 start.go:496] detecting cgroup driver to use...
	I1213 08:57:31.906123    1308 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 08:57:31.911089    1308 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 08:57:31.932611    1308 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1213 08:57:31.933145    1308 command_runner.go:130] > [Unit]
	I1213 08:57:31.933145    1308 command_runner.go:130] > Description=Docker Application Container Engine
	I1213 08:57:31.933145    1308 command_runner.go:130] > Documentation=https://docs.docker.com
	I1213 08:57:31.933145    1308 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1213 08:57:31.933145    1308 command_runner.go:130] > Wants=network-online.target containerd.service
	I1213 08:57:31.933145    1308 command_runner.go:130] > Requires=docker.socket
	I1213 08:57:31.933145    1308 command_runner.go:130] > StartLimitBurst=3
	I1213 08:57:31.933239    1308 command_runner.go:130] > StartLimitIntervalSec=60
	I1213 08:57:31.933239    1308 command_runner.go:130] > [Service]
	I1213 08:57:31.933239    1308 command_runner.go:130] > Type=notify
	I1213 08:57:31.933239    1308 command_runner.go:130] > Restart=always
	I1213 08:57:31.933239    1308 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1213 08:57:31.933239    1308 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1213 08:57:31.933303    1308 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1213 08:57:31.933336    1308 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1213 08:57:31.933336    1308 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1213 08:57:31.933336    1308 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1213 08:57:31.933336    1308 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1213 08:57:31.933336    1308 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1213 08:57:31.933336    1308 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1213 08:57:31.933415    1308 command_runner.go:130] > ExecStart=
	I1213 08:57:31.933415    1308 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1213 08:57:31.933415    1308 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1213 08:57:31.933415    1308 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1213 08:57:31.933498    1308 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1213 08:57:31.933498    1308 command_runner.go:130] > LimitNOFILE=infinity
	I1213 08:57:31.933498    1308 command_runner.go:130] > LimitNPROC=infinity
	I1213 08:57:31.933498    1308 command_runner.go:130] > LimitCORE=infinity
	I1213 08:57:31.933498    1308 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1213 08:57:31.933498    1308 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1213 08:57:31.933498    1308 command_runner.go:130] > TasksMax=infinity
	I1213 08:57:31.933498    1308 command_runner.go:130] > TimeoutStartSec=0
	I1213 08:57:31.933572    1308 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1213 08:57:31.933591    1308 command_runner.go:130] > Delegate=yes
	I1213 08:57:31.933591    1308 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1213 08:57:31.933591    1308 command_runner.go:130] > KillMode=process
	I1213 08:57:31.933591    1308 command_runner.go:130] > OOMScoreAdjust=-500
	I1213 08:57:31.933591    1308 command_runner.go:130] > [Install]
	I1213 08:57:31.933591    1308 command_runner.go:130] > WantedBy=multi-user.target
	I1213 08:57:31.938295    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 08:57:31.960377    1308 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 08:57:32.049121    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 08:57:32.071680    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 08:57:32.093496    1308 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 08:57:32.115103    1308 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1213 08:57:32.119951    1308 ssh_runner.go:195] Run: which cri-dockerd
	I1213 08:57:32.126371    1308 command_runner.go:130] > /usr/bin/cri-dockerd
	I1213 08:57:32.130902    1308 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 08:57:32.144169    1308 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1213 08:57:32.170348    1308 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 08:57:32.320163    1308 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 08:57:32.454851    1308 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 08:57:32.454851    1308 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 08:57:32.483674    1308 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1213 08:57:32.505831    1308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:57:32.661991    1308 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 08:57:33.665330    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 08:57:33.689450    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 08:57:33.711087    1308 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1213 08:57:33.739462    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 08:57:33.760714    1308 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 08:57:33.900242    1308 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 08:57:34.052335    1308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:57:34.188283    1308 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 08:57:34.213402    1308 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1213 08:57:34.237672    1308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:57:34.381154    1308 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 08:57:34.499581    1308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 08:57:34.518141    1308 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 08:57:34.522686    1308 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 08:57:34.529494    1308 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1213 08:57:34.529494    1308 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 08:57:34.529494    1308 command_runner.go:130] > Device: 0,112	Inode: 1755        Links: 1
	I1213 08:57:34.529494    1308 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1213 08:57:34.529494    1308 command_runner.go:130] > Access: 2025-12-13 08:57:34.386291479 +0000
	I1213 08:57:34.529494    1308 command_runner.go:130] > Modify: 2025-12-13 08:57:34.386291479 +0000
	I1213 08:57:34.529494    1308 command_runner.go:130] > Change: 2025-12-13 08:57:34.386291479 +0000
	I1213 08:57:34.529494    1308 command_runner.go:130] >  Birth: -
	I1213 08:57:34.529494    1308 start.go:564] Will wait 60s for crictl version
	I1213 08:57:34.534224    1308 ssh_runner.go:195] Run: which crictl
	I1213 08:57:34.541202    1308 command_runner.go:130] > /usr/local/bin/crictl
	I1213 08:57:34.545269    1308 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 08:57:34.587655    1308 command_runner.go:130] > Version:  0.1.0
	I1213 08:57:34.587655    1308 command_runner.go:130] > RuntimeName:  docker
	I1213 08:57:34.587655    1308 command_runner.go:130] > RuntimeVersion:  29.1.2
	I1213 08:57:34.587655    1308 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 08:57:34.587655    1308 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1213 08:57:34.590292    1308 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 08:57:34.627699    1308 command_runner.go:130] > 29.1.2
	I1213 08:57:34.631112    1308 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 08:57:34.669555    1308 command_runner.go:130] > 29.1.2
	I1213 08:57:34.677969    1308 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1213 08:57:34.681392    1308 cli_runner.go:164] Run: docker exec -t functional-482100 dig +short host.docker.internal
	I1213 08:57:34.898094    1308 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1213 08:57:34.902419    1308 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1213 08:57:34.910595    1308 command_runner.go:130] > 192.168.65.254	host.minikube.internal
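host.docker.internal is Docker Desktop's built-in alias for the host machine; minikube resolves it from inside the node container and records the answer as host.minikube.internal:

	docker exec -t functional-482100 dig +short host.docker.internal
	# -> 192.168.65.254 in this run; the grep above confirms it is
	#    already pinned in the node's /etc/hosts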
	I1213 08:57:34.914565    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:34.972832    1308 kubeadm.go:884] updating cluster {Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 08:57:34.972832    1308 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 08:57:34.977045    1308 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1213 08:57:35.008739    1308 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1213 08:57:35.008739    1308 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 08:57:35.008739    1308 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 08:57:35.010249    1308 docker.go:621] Images already preloaded, skipping extraction
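The preload check is a plain image inventory: list every repo:tag known to the runtime and compare against the expected set for v1.35.0-beta.0; when all are present, as here, tarball extraction is skipped:

	docker images --format '{{.Repository}}:{{.Tag}}'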
	I1213 08:57:35.013678    1308 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 08:57:35.043903    1308 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 08:57:35.043957    1308 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 08:57:35.043957    1308 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 08:57:35.043957    1308 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 08:57:35.043957    1308 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1213 08:57:35.043957    1308 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1213 08:57:35.043957    1308 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1213 08:57:35.044022    1308 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 08:57:35.044104    1308 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 08:57:35.044104    1308 cache_images.go:86] Images are preloaded, skipping loading
	I1213 08:57:35.044160    1308 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1213 08:57:35.044312    1308 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-482100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 08:57:35.047625    1308 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1213 08:57:35.491294    1308 command_runner.go:130] > cgroupfs
	I1213 08:57:35.491294    1308 cni.go:84] Creating CNI manager for ""
	I1213 08:57:35.491294    1308 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 08:57:35.491294    1308 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 08:57:35.491294    1308 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-482100 NodeName:functional-482100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 08:57:35.491294    1308 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-482100"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 08:57:35.495479    1308 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 08:57:35.511680    1308 command_runner.go:130] > kubeadm
	I1213 08:57:35.511680    1308 command_runner.go:130] > kubectl
	I1213 08:57:35.511680    1308 command_runner.go:130] > kubelet
	I1213 08:57:35.511680    1308 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 08:57:35.515943    1308 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 08:57:35.527808    1308 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1213 08:57:35.545969    1308 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 08:57:35.565749    1308 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1213 08:57:35.590269    1308 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 08:57:35.598806    1308 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 08:57:35.603098    1308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:57:35.752426    1308 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 08:57:35.771354    1308 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100 for IP: 192.168.49.2
	I1213 08:57:35.771354    1308 certs.go:195] generating shared ca certs ...
	I1213 08:57:35.771354    1308 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:57:35.771354    1308 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1213 08:57:35.772397    1308 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1213 08:57:35.772549    1308 certs.go:257] generating profile certs ...
	I1213 08:57:35.772794    1308 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\client.key
	I1213 08:57:35.772794    1308 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.key.13621831
	I1213 08:57:35.773396    1308 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.key
	I1213 08:57:35.773447    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 08:57:35.773539    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1213 08:57:35.773616    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 08:57:35.773761    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 08:57:35.773831    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 08:57:35.773939    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 08:57:35.773999    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 08:57:35.774105    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 08:57:35.774559    1308 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem (1338 bytes)
	W1213 08:57:35.774827    1308 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968_empty.pem, impossibly tiny 0 bytes
	I1213 08:57:35.774870    1308 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1213 08:57:35.775069    1308 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1213 08:57:35.775069    1308 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1213 08:57:35.775069    1308 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1213 08:57:35.775696    1308 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem (1708 bytes)
	I1213 08:57:35.775842    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:57:35.775842    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem -> /usr/share/ca-certificates/2968.pem
	I1213 08:57:35.775842    1308 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> /usr/share/ca-certificates/29682.pem
	I1213 08:57:35.775842    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 08:57:35.807179    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 08:57:35.833688    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 08:57:35.863566    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 08:57:35.894920    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 08:57:35.921314    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 08:57:35.946004    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 08:57:35.973030    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 08:57:36.001405    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 08:57:36.027495    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem --> /usr/share/ca-certificates/2968.pem (1338 bytes)
	I1213 08:57:36.053673    1308 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /usr/share/ca-certificates/29682.pem (1708 bytes)
	I1213 08:57:36.083163    1308 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 08:57:36.106205    1308 ssh_runner.go:195] Run: openssl version
	I1213 08:57:36.124518    1308 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 08:57:36.128653    1308 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2968.pem
	I1213 08:57:36.148109    1308 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2968.pem /etc/ssl/certs/2968.pem
	I1213 08:57:36.170644    1308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2968.pem
	I1213 08:57:36.179909    1308 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 08:48 /usr/share/ca-certificates/2968.pem
	I1213 08:57:36.179909    1308 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:48 /usr/share/ca-certificates/2968.pem
	I1213 08:57:36.184506    1308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2968.pem
	I1213 08:57:36.230303    1308 command_runner.go:130] > 51391683
	I1213 08:57:36.235418    1308 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 08:57:36.252420    1308 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29682.pem
	I1213 08:57:36.271009    1308 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29682.pem /etc/ssl/certs/29682.pem
	I1213 08:57:36.291738    1308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29682.pem
	I1213 08:57:36.301002    1308 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 08:48 /usr/share/ca-certificates/29682.pem
	I1213 08:57:36.301002    1308 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:48 /usr/share/ca-certificates/29682.pem
	I1213 08:57:36.306035    1308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29682.pem
	I1213 08:57:36.348842    1308 command_runner.go:130] > 3ec20f2e
	I1213 08:57:36.353574    1308 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 08:57:36.371994    1308 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:57:36.390417    1308 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 08:57:36.409132    1308 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:57:36.417987    1308 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 08:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:57:36.418020    1308 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:57:36.422336    1308 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:57:36.464222    1308 command_runner.go:130] > b5213941
	I1213 08:57:36.469763    1308 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
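The hash-then-`test -L` pairs above verify OpenSSL's CA lookup scheme: libssl locates a trusted cert by hashing its subject and opening /etc/ssl/certs/<hash>.0. The link each check expects would be created along these lines (its creation is not shown in this excerpt; a sketch of the scheme, using the minikubeCA cert):

	PEM=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$PEM")   # b5213941 in this run
	sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"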
	I1213 08:57:36.486907    1308 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 08:57:36.493430    1308 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 08:57:36.493430    1308 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 08:57:36.493430    1308 command_runner.go:130] > Device: 8,48	Inode: 15294       Links: 1
	I1213 08:57:36.493430    1308 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 08:57:36.493430    1308 command_runner.go:130] > Access: 2025-12-13 08:53:22.558756963 +0000
	I1213 08:57:36.493430    1308 command_runner.go:130] > Modify: 2025-12-13 08:49:20.154446480 +0000
	I1213 08:57:36.493430    1308 command_runner.go:130] > Change: 2025-12-13 08:49:20.154446480 +0000
	I1213 08:57:36.493430    1308 command_runner.go:130] >  Birth: 2025-12-13 08:49:20.154446480 +0000
	I1213 08:57:36.498322    1308 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 08:57:36.542775    1308 command_runner.go:130] > Certificate will not expire
	I1213 08:57:36.547618    1308 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 08:57:36.590488    1308 command_runner.go:130] > Certificate will not expire
	I1213 08:57:36.594826    1308 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 08:57:36.640226    1308 command_runner.go:130] > Certificate will not expire
	I1213 08:57:36.644848    1308 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 08:57:36.698932    1308 command_runner.go:130] > Certificate will not expire
	I1213 08:57:36.703709    1308 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 08:57:36.746225    1308 command_runner.go:130] > Certificate will not expire
	I1213 08:57:36.751252    1308 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 08:57:36.796246    1308 command_runner.go:130] > Certificate will not expire
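Each "Certificate will not expire" line comes from openssl's -checkend probe, which succeeds only if the certificate is still valid N seconds from now; minikube uses 86400 (24 hours) for every control-plane cert:

	openssl x509 -noout -checkend 86400 \
	  -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
	# exit 0 and "Certificate will not expire" when still valid in 24 h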
	I1213 08:57:36.796605    1308 kubeadm.go:401] StartCluster: {Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:57:36.800619    1308 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 08:57:36.835511    1308 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 08:57:36.848084    1308 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 08:57:36.848084    1308 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 08:57:36.848084    1308 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 08:57:36.848084    1308 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 08:57:36.848084    1308 kubeadm.go:598] restartPrimaryControlPlane start ...
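Note: the restart decision above hinges on the sudo ls probe: because all three paths exist on the node, minikube attempts a control-plane restart instead of a fresh kubeadm init. A rough local-filesystem analogue of that gate (checked directly rather than over SSH, for illustration only):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // The three files/dirs the log probes with sudo ls on the node.
        paths := []string{
            "/var/lib/kubelet/kubeadm-flags.env",
            "/var/lib/kubelet/config.yaml",
            "/var/lib/minikube/etcd",
        }
        restart := true
        for _, p := range paths {
            if _, err := os.Stat(p); err != nil {
                restart = false // any missing path would mean a fresh kubeadm init
                break
            }
        }
        fmt.Println("attempt cluster restart:", restart)
    }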
	I1213 08:57:36.853050    1308 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 08:57:36.866011    1308 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:57:36.869675    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-482100
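Note: the inspect template {{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}} pulls the first host port Docker bound to the container's 8441/tcp; in this run it resolves to 63845, the Host the client config below points at. The same extraction wrapped in Go (assumes the docker CLI is available):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostPort runs the same inspect template as the log line and returns the
    // host port Docker mapped to containerPort (e.g. "8441/tcp").
    func hostPort(container, containerPort string) (string, error) {
        tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, containerPort)
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        p, err := hostPort("functional-482100", "8441/tcp")
        fmt.Println(p, err) // 63845 in this run, per the client config below
    }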
	I1213 08:57:36.923417    1308 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-482100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:57:36.923684    1308 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "functional-482100" cluster setting kubeconfig missing "functional-482100" context setting]
	I1213 08:57:36.923684    1308 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
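Note: the repair rewrites the kubeconfig so that a cluster entry and a context entry named after the profile both exist. A sketch of the same presence check using client-go's clientcmd loader (path and profile name taken from the log; assumes the k8s.io/client-go module is available):

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.LoadFromFile(`C:\Users\jenkins.minikube4\minikube-integration\kubeconfig`)
        if err != nil {
            panic(err)
        }
        // Both a cluster entry and a context entry must exist for the profile;
        // the log found neither, hence "needs updating (will repair)".
        _, hasCluster := cfg.Clusters["functional-482100"]
        _, hasContext := cfg.Contexts["functional-482100"]
        fmt.Println("needs repair:", !hasCluster || !hasContext)
    }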
	I1213 08:57:36.940090    1308 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:57:36.940688    1308 kapi.go:59] client config for functional-482100: &rest.Config{Host:"https://127.0.0.1:63845", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff744969080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 08:57:36.941864    1308 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 08:57:36.941864    1308 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 08:57:36.941864    1308 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 08:57:36.941864    1308 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 08:57:36.941864    1308 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 08:57:36.941864    1308 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 08:57:36.946352    1308 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 08:57:36.960987    1308 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1213 08:57:36.961998    1308 kubeadm.go:602] duration metric: took 113.913ms to restartPrimaryControlPlane
	I1213 08:57:36.961998    1308 kubeadm.go:403] duration metric: took 165.4668ms to StartCluster
	I1213 08:57:36.961998    1308 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:57:36.961998    1308 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:57:36.963076    1308 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:57:36.963883    1308 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 08:57:36.963883    1308 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 08:57:36.963883    1308 addons.go:70] Setting default-storageclass=true in profile "functional-482100"
	I1213 08:57:36.963883    1308 addons.go:70] Setting storage-provisioner=true in profile "functional-482100"
	I1213 08:57:36.963883    1308 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 08:57:36.963883    1308 addons.go:239] Setting addon storage-provisioner=true in "functional-482100"
	I1213 08:57:36.963883    1308 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-482100"
	I1213 08:57:36.964406    1308 host.go:66] Checking if "functional-482100" exists ...
	I1213 08:57:36.966968    1308 out.go:179] * Verifying Kubernetes components...
	I1213 08:57:36.972864    1308 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
	I1213 08:57:36.972864    1308 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
	I1213 08:57:36.974067    1308 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:57:37.028122    1308 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 08:57:37.032121    1308 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:37.032121    1308 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 08:57:37.035128    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:37.050133    1308 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:57:37.050133    1308 kapi.go:59] client config for functional-482100: &rest.Config{Host:"https://127.0.0.1:63845", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff744969080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 08:57:37.051141    1308 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 08:57:37.051141    1308 addons.go:239] Setting addon default-storageclass=true in "functional-482100"
	I1213 08:57:37.051141    1308 host.go:66] Checking if "functional-482100" exists ...
	I1213 08:57:37.059130    1308 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
	I1213 08:57:37.090124    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 08:57:37.112122    1308 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:37.112122    1308 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 08:57:37.115122    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:37.124126    1308 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 08:57:37.163123    1308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
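Note: "scp memory" in the lines above means the addon manifests are streamed from minikube's memory straight onto the node over the SSH session (127.0.0.1:63841, user docker, per the sshutil lines), not copied from a file on disk. A rough stand-in using a plain ssh client (auth flags omitted; the command and manifest body are illustrative only):

    package main

    import (
        "bytes"
        "os/exec"
    )

    func main() {
        manifest := []byte("# storage-provisioner.yaml body (2676 bytes in this run)\n")
        // Stream the in-memory manifest to the node, as "scp memory" does;
        // endpoint and user come from the sshutil lines above, key auth omitted.
        cmd := exec.Command("ssh", "-p", "63841", "docker@127.0.0.1",
            "sudo tee /etc/kubernetes/addons/storage-provisioner.yaml >/dev/null")
        cmd.Stdin = bytes.NewReader(manifest)
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }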
	I1213 08:57:37.218965    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:37.244846    1308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-482100
	I1213 08:57:37.292847    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:37.297857    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:37.298846    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:37.298846    1308 retry.go:31] will retry after 278.997974ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
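Note: every apply fails the same way because kubectl's validation path needs the apiserver's OpenAPI document, and nothing answers on 8441 yet while the control plane restarts; retry.go reschedules each apply with a growing, jittered delay (279ms and 213ms here, rising to 13s later in the log). A generic sketch of that retry shape (durations illustrative; this is not minikube's actual helper):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff keeps calling fn until it succeeds or attempts run out,
    // sleeping a jittered, roughly doubling delay between tries.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        delay := base
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            jitter := time.Duration(rand.Int63n(int64(delay) / 2))
            time.Sleep(delay + jitter)
            delay *= 2
        }
        return fmt.Errorf("still failing after %d attempts: %w", attempts, err)
    }

    func main() {
        calls := 0
        _ = retryWithBackoff(5, 250*time.Millisecond, func() error {
            calls++
            if calls < 3 {
                return fmt.Errorf("connection refused") // stand-in for the kubectl failure
            }
            return nil
        })
        fmt.Println("succeeded after", calls, "calls")
    }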
	I1213 08:57:37.298846    1308 node_ready.go:35] waiting up to 6m0s for node "functional-482100" to be "Ready" ...
	I1213 08:57:37.298846    1308 type.go:168] "Request Body" body=""
	I1213 08:57:37.298846    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:37.300855    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
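Note: this GET /api/v1/nodes/functional-482100 loop is the node_ready.go wait for the node's Ready condition; the status="" responses mean the connection dropped before an HTTP status line arrived. With client-go, the check being attempted reduces to roughly the following (assumes a reachable kubeconfig; module paths as in upstream Kubernetes):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady fetches one node and reports whether its Ready condition is True.
    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        ok, err := nodeReady(kubernetes.NewForConfigOrDie(cfg), "functional-482100")
        fmt.Println(ok, err)
    }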
	I1213 08:57:37.389624    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:37.394960    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:37.394960    1308 retry.go:31] will retry after 212.815514ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:37.583432    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:37.612508    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:37.662694    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:37.668089    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:37.668089    1308 retry.go:31] will retry after 421.785382ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:37.691227    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:37.696684    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:37.696684    1308 retry.go:31] will retry after 387.963958ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.090409    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:38.094708    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:38.167644    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:38.172931    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.172931    1308 retry.go:31] will retry after 654.783355ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.174195    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:38.178117    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.178179    1308 retry.go:31] will retry after 288.314182ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.301152    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:38.301683    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:38.304388    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
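Note: the with_retry.go entries are client-go honoring a Retry-After hint (a fixed 1s here) for up to ten attempts before surfacing the EOF logged at 08:57:47. Reading that header by hand looks like this (integer-seconds form only; real servers may also send an HTTP date):

    package main

    import (
        "fmt"
        "net/http"
        "strconv"
        "time"
    )

    // retryAfter extracts the server-suggested delay from a response, falling
    // back to a default when the header is absent or malformed.
    func retryAfter(resp *http.Response, fallback time.Duration) time.Duration {
        if resp == nil {
            return fallback
        }
        if s := resp.Header.Get("Retry-After"); s != "" {
            if secs, err := strconv.Atoi(s); err == nil {
                return time.Duration(secs) * time.Second
            }
        }
        return fallback
    }

    func main() {
        resp := &http.Response{Header: http.Header{"Retry-After": []string{"1"}}}
        fmt.Println(retryAfter(resp, 2*time.Second)) // 1s, matching the log's delay
    }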
	I1213 08:57:38.472962    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:38.544996    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:38.548547    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.548623    1308 retry.go:31] will retry after 1.098701937s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.833272    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:38.912142    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:38.912142    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:38.912142    1308 retry.go:31] will retry after 808.399476ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:39.305249    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:39.305249    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:39.308473    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:39.652260    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:39.721531    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:39.726229    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 08:57:39.726899    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:39.726899    1308 retry.go:31] will retry after 1.580407023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:39.799856    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:39.802238    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:39.802238    1308 retry.go:31] will retry after 1.163449845s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:40.308791    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:40.308791    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:40.310792    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:40.971107    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:41.051235    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:41.056481    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:41.056595    1308 retry.go:31] will retry after 2.292483012s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:41.312219    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:41.312219    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:41.313763    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:41.315446    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:41.385280    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:41.389328    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:41.389328    1308 retry.go:31] will retry after 2.10655749s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:42.316064    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:42.316469    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:42.319430    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:43.319659    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:43.319659    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:43.322154    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:43.354119    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:43.424936    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:43.428566    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:43.428566    1308 retry.go:31] will retry after 2.451441131s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:43.500768    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:43.577861    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:43.581800    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:43.581870    1308 retry.go:31] will retry after 1.842575818s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:44.322393    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:44.322393    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:44.326064    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:45.326352    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:45.326352    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:45.329823    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:45.430441    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:45.504084    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:45.509721    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:45.509813    1308 retry.go:31] will retry after 3.320490506s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:45.885819    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:45.962560    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:45.966882    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:45.966882    1308 retry.go:31] will retry after 5.131341184s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:46.330362    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:46.330362    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:46.333170    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:47.333778    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:47.333778    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:47.337260    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1213 08:57:47.337260    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 08:57:47.337260    1308 type.go:168] "Request Body" body=""
	I1213 08:57:47.337260    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:47.340404    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:48.340937    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:48.340937    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:48.344443    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:48.835623    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:48.914169    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:48.918486    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:48.918486    1308 retry.go:31] will retry after 6.605490232s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:49.345162    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:49.345162    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:49.347526    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:50.348478    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:50.348478    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:50.351813    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:51.103982    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:51.174396    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:51.177073    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:51.177136    1308 retry.go:31] will retry after 4.217545245s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:51.352019    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:51.352363    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:51.354826    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:52.355908    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:52.355908    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:52.358993    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:53.359347    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:53.359730    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:53.362425    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:54.363245    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:54.363536    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:54.366267    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:57:55.367715    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:55.367715    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:55.371143    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:55.400351    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:57:55.476385    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:55.480063    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:55.480122    1308 retry.go:31] will retry after 11.422205159s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:55.528824    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:57:55.599872    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:57:55.604580    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:55.604626    1308 retry.go:31] will retry after 13.338795854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:57:56.371517    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:56.371517    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:56.375228    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:57.375899    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:57.375899    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:57.378899    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1213 08:57:57.379427    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 08:57:57.379613    1308 type.go:168] "Request Body" body=""
	I1213 08:57:57.379640    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:57.381380    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
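Each round_trippers pair above is one GET against the apiserver through the container's published port (127.0.0.1:63845), always sent with the protobuf-first Accept header and the minikube User-Agent; the empty status plus the node_ready EOF warning means the connection drops before a status line arrives. The snippet below reproduces that request with plain net/http rather than client-go, as a hedged probe; the InsecureSkipVerify is only because the bring-up apiserver presents a self-signed certificate.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET",
		"https://127.0.0.1:63845/api/v1/nodes/functional-482100", nil)
	if err != nil {
		panic(err)
	}
	// Headers copied from the log lines above.
	req.Header.Set("Accept", "application/vnd.kubernetes.protobuf,application/json")
	req.Header.Set("User-Agent", "minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format")

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // probe only
	}}
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("request failed:", err) // e.g. the EOF seen in the log
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}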
	I1213 08:57:58.382025    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:58.382025    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:58.385451    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:57:59.385982    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:57:59.386304    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:57:59.388570    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:00.389156    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:00.389156    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:00.393493    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 08:58:01.394059    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:01.394059    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:01.397148    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:02.397228    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:02.397593    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:02.400363    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:03.400715    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:03.401100    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:03.403595    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:04.404146    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:04.404146    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:04.407029    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:05.407299    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:05.407299    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:05.409705    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:06.410552    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:06.410552    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:06.413575    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:06.907694    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:58:06.989453    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:58:06.993505    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:06.993505    1308 retry.go:31] will retry after 9.12046724s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:07.413861    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:07.413861    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:07.423766    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=9
	W1213 08:58:07.423766    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 08:58:07.423766    1308 type.go:168] "Request Body" body=""
	I1213 08:58:07.423766    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:07.426420    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
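The with_retry.go lines count attempts 1 through 10, each after a reported Retry-After of 1s: the client sleeps the advertised delay, retries, and after the tenth attempt node_ready.go logs the EOF and starts a fresh poll. Below is a minimal sketch of that honor-Retry-After loop; it illustrates the pattern and is not client-go's implementation.

package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// doWithRetryAfter retries a request while the server keeps answering
// with a Retry-After header, up to maxAttempts, mirroring the
// delay="1s" attempt=N lines above.
func doWithRetryAfter(client *http.Client, req *http.Request, maxAttempts int) (*http.Response, error) {
	for attempt := 1; ; attempt++ {
		resp, err := client.Do(req)
		if err != nil {
			return nil, err
		}
		ra := resp.Header.Get("Retry-After")
		if ra == "" || attempt >= maxAttempts {
			return resp, nil
		}
		resp.Body.Close()
		secs, convErr := strconv.Atoi(ra)
		if convErr != nil {
			secs = 1 // fall back to the 1s delay the log shows
		}
		fmt.Printf("Got a Retry-After response, delay=%ds attempt=%d\n", secs, attempt)
		time.Sleep(time.Duration(secs) * time.Second)
	}
}

func main() {
	req, err := http.NewRequest("GET",
		"https://127.0.0.1:63845/api/v1/nodes/functional-482100", nil)
	if err != nil {
		panic(err)
	}
	resp, err := doWithRetryAfter(http.DefaultClient, req, 10)
	if err != nil {
		fmt.Println("gave up:", err)
		return
	}
	resp.Body.Close()
	fmt.Println("final status:", resp.Status)
}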
	I1213 08:58:08.426748    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:08.426748    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:08.429523    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:08.949269    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:58:09.021443    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:58:09.021574    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:09.021574    1308 retry.go:31] will retry after 18.212645226s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:09.429654    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:09.429654    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:09.434475    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 08:58:10.434763    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:10.434763    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:10.438337    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:11.438992    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:11.438992    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:11.442157    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:12.442370    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:12.442370    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:12.445441    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:13.446557    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:13.446557    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:13.449579    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:14.449909    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:14.449909    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:14.453875    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:15.453999    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:15.454347    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:15.457109    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:16.119722    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:58:16.199861    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:58:16.203796    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:16.203841    1308 retry.go:31] will retry after 32.127892546s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:16.457492    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:16.457492    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:16.460671    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:17.461098    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:17.461098    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:17.464303    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1213 08:58:17.464392    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 08:58:17.464557    1308 type.go:168] "Request Body" body=""
	I1213 08:58:17.464596    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:17.466792    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
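What this loop is ultimately waiting for is the node object's Ready condition, which it can never read while every GET ends in EOF. A client-go sketch of the same wait is below; the kubeconfig path and the five-minute deadline are illustrative assumptions (the logged path lives inside the minikube node, not on the host).

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed path for illustration; point this at a reachable kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-482100", metav1.GetOptions{})
		if err != nil {
			// Same shape as the warnings above: log and keep polling.
			fmt.Printf("error getting node (will retry): %v\n", err)
			time.Sleep(time.Second)
			continue
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Println("node is Ready")
				return
			}
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for node Ready")
}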
	I1213 08:58:18.467178    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:18.467178    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:18.471411    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 08:58:19.472813    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:19.472813    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:19.475365    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:20.475825    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:20.475825    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:20.478756    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:21.479284    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:21.479284    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:21.482725    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:22.483047    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:22.483047    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:22.486928    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:23.487680    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:23.487680    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:23.491133    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:24.491850    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:24.492121    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:24.495131    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:25.495436    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:25.495893    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:25.498242    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:26.498882    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:26.498882    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:26.501986    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:27.239685    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:58:27.315134    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:58:27.318446    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:27.318446    1308 retry.go:31] will retry after 22.292291086s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:27.502907    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:27.502907    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:27.505700    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1213 08:58:27.505700    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 08:58:27.505700    1308 type.go:168] "Request Body" body=""
	I1213 08:58:27.505700    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:27.508521    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:28.509510    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:28.509510    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:28.512707    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:29.513169    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:29.513169    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:29.516081    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:30.517601    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:30.517601    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:30.520368    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:31.520700    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:31.521119    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:31.524120    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:32.524848    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:32.524848    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:32.528137    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:33.529023    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:33.529412    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:33.532996    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:34.533392    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:34.533697    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:34.536406    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:35.536910    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:35.536910    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:35.539801    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:36.540290    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:36.540290    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:36.543462    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:37.544092    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:37.544398    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:37.547080    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1213 08:58:37.547165    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 08:58:37.547240    1308 type.go:168] "Request Body" body=""
	I1213 08:58:37.547322    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:37.549686    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:38.550568    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:38.550568    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:38.554061    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:39.554545    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:39.554545    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:39.556910    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:40.557343    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:40.557343    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:40.562456    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1213 08:58:41.563271    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:41.563271    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:41.566401    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:42.566676    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:42.566676    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:42.569495    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:43.570436    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:43.570436    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:43.573856    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:44.574034    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:44.574034    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:44.576971    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:45.577736    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:45.577736    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:45.580563    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:46.580998    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:46.580998    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:46.584404    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:47.585574    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:47.585574    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:47.589116    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1213 08:58:47.589116    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 08:58:47.589285    1308 type.go:168] "Request Body" body=""
	I1213 08:58:47.589330    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:47.591421    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:48.337063    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:58:48.419155    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:58:48.419236    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:48.419312    1308 retry.go:31] will retry after 42.344315794s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:48.592137    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:48.592503    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:48.594564    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:49.594849    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:49.594849    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:49.598177    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:49.616306    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:58:49.690748    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:58:49.696226    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:49.696226    1308 retry.go:31] will retry after 43.889805704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:58:50.598940    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:50.598940    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:50.602650    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:51.602781    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:51.602781    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:51.606654    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:52.607136    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:52.607136    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:52.610410    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:53.610695    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:53.611291    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:53.614086    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:54.614262    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:54.614262    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:54.617596    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:55.618389    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:55.618389    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:55.621130    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:56.621484    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:56.621936    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:56.626456    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 08:58:57.626653    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:57.626653    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:57.630131    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1213 08:58:57.630131    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 08:58:57.630323    1308 type.go:168] "Request Body" body=""
	I1213 08:58:57.630411    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:57.632861    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:58:58.633441    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:58.634089    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:58.637246    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:58:59.637793    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:58:59.638147    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:58:59.641409    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:00.641531    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:00.641871    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:00.644335    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:59:01.644762    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:01.644762    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:01.647872    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:02.648069    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:02.648069    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:02.651180    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:03.651302    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:03.651302    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:03.654332    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:04.654665    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:04.654665    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:04.657952    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:05.658178    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:05.658178    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:05.662672    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 08:59:06.663347    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:06.663347    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:06.666728    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:07.667532    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:07.667885    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:07.670688    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1213 08:59:07.670852    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 08:59:07.670996    1308 type.go:168] "Request Body" body=""
	I1213 08:59:07.671070    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:07.675143    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 08:59:08.675540    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:08.675540    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:08.679392    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:09.679704    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:09.679704    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:09.683514    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:10.683721    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:10.683721    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:10.686924    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:11.687492    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:11.687492    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:11.691432    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:12.692349    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:12.692349    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:12.695226    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:59:13.696218    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:13.696218    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:13.699830    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:14.700112    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:14.700547    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:14.704305    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:15.704907    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:15.705360    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:15.708341    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 08:59:16.709464    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:16.709464    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:16.712813    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 08:59:17.713633    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 08:59:17.713633    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 08:59:17.716674    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1213 08:59:17.716674    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
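The 66-line block above is one complete pass of the pattern that fills the rest of this log: round_trippers.go issues the GET, every response comes back with empty status and headers plus a Retry-After, with_retry.go (client-go's retry layer) sleeps the advertised 1s, and after attempt 10 node_ready.go logs the EOF warning and starts a fresh pass. A minimal stdlib-only sketch of that loop follows; the URL and the 10-attempt cap are taken from the log, while everything else (the client setup, including skipping TLS verification for the test cluster's self-signed cert) is illustrative and not minikube's actual code.

// retry_sketch.go: illustrative sketch only, not minikube's implementation.
// Reproduces the shape of the loop above: GET the node object, honor
// Retry-After, and give up after 10 attempts.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"strconv"
	"time"
)

func main() {
	url := "https://127.0.0.1:63845/api/v1/nodes/functional-482100" // from the log
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Self-signed test cert; skipping verification is for this sketch only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for attempt := 1; attempt <= 10; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("attempt %d: %v\n", attempt, err) // e.g. the EOFs above
			time.Sleep(time.Second)
			continue
		}
		retryAfter := resp.Header.Get("Retry-After")
		resp.Body.Close()
		if retryAfter == "" {
			fmt.Printf("attempt %d: done, status %s\n", attempt, resp.Status)
			return
		}
		// The log always shows delay="1s"; fall back to 1s if unparsable.
		secs, err := strconv.Atoi(retryAfter)
		if err != nil || secs <= 0 {
			secs = 1
		}
		time.Sleep(time.Duration(secs) * time.Second)
	}
	fmt.Println("gave up after 10 attempts")
}

Against a healthy apiserver the first GET returns 200 and the loop exits immediately; here the forwarded port answers within a few milliseconds but the apiserver behind it never does, so every pass exhausts all 10 attempts.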
	[... 66 near-identical lines elided: re-request at 08:59:17.717, then Retry-After attempts 1-10 (08:59:18-08:59:27), each GET answered in 2-4 ms with empty status/headers ...]
	W1213 08:59:27.755077    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[... elided: re-request at 08:59:27.755, then Retry-After attempts 1-3 (08:59:28-08:59:30), same pattern ...]
	I1213 08:59:30.770278    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:59:31.058703    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:59:31.062891    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:59:31.062891    1308 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
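Both addon applies in this log fail the same way: kubectl's client-side validation tries to download the OpenAPI schema from the apiserver (https://localhost:8441 inside the guest) and the TCP connection is refused outright, meaning nothing is listening on that port. The --validate=false escape hatch the error message suggests would only skip the schema download; it would not make the apply succeed against a down apiserver. Below is a stdlib-only probe, illustrative rather than anything minikube actually runs, that separates this "connection refused" case from timeouts or TLS problems:

// probe_sketch.go: illustrative only, not part of minikube. Checks whether
// anything is listening on the apiserver port that kubectl's OpenAPI
// validation tried to reach.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Port taken from the error above; run this inside the minikube guest.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// "connection refused" means the host answered but no process is
		// bound to the port (apiserver down); a timeout would instead point
		// at networking between client and host.
		fmt.Printf("apiserver port unreachable: %v\n", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}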
	[... elided: Retry-After attempts 4-5 (08:59:31-08:59:32), same pattern ...]
	I1213 08:59:33.593527    1308 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:59:33.670412    1308 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:59:33.677065    1308 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:59:33.677065    1308 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 08:59:33.680151    1308 out.go:179] * Enabled addons: 
	I1213 08:59:33.683381    1308 addons.go:530] duration metric: took 1m56.7187029s for enable addons: enabled=[]
	[... elided: Retry-After attempts 6-10 (08:59:33-08:59:37), same pattern ...]
	W1213 08:59:37.792181    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[... elided: re-request at 08:59:37.792, then Retry-After attempts 1-10 (08:59:38-08:59:47), same pattern ...]
	W1213 08:59:47.837252    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[... elided: re-request at 08:59:47.837, then Retry-After attempts 1-10 (08:59:48-08:59:57), same pattern ...]
	W1213 08:59:57.878519    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[... elided: re-request at 08:59:57.878, then Retry-After attempts 1-10 (08:59:58-09:00:07), same pattern ...]
	W1213 09:00:07.920154    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[... elided: re-request at 09:00:07.920, then Retry-After attempts 1-10 (09:00:08-09:00:17), same pattern ...]
	W1213 09:00:17.960625    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[... elided: re-request at 09:00:17.960, then Retry-After attempts 1-10 (09:00:18-09:00:27), same pattern ...]
	W1213 09:00:27.998665    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	[... elided: re-request at 09:00:27.998, then Retry-After attempts 1-10 (09:00:29-09:00:38), same pattern ...]
	W1213 09:00:38.038187    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
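The ten attempts above are one full cycle of minikube's node-readiness wait: node_ready.go polls GET /api/v1/nodes/functional-482100 once per second, client-go re-issues the request each time the server answers with a Retry-After, and after the tenth attempt the EOF is logged and a new cycle begins. The empty status="" responses together with the EOF suggest the apiserver on 127.0.0.1:63845 is accepting connections but closing them before completing a reply. As an illustration only, a minimal client-go sketch of this kind of readiness poll (not minikube's actual implementation; it assumes the default kubeconfig and uses the node name from this log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node once per second until its Ready condition
// is True or the timeout expires. Transient errors (like the EOFs in the
// log above) are swallowed so the poll keeps retrying.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient failure: retry, as node_ready.go does
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // no Ready condition reported yet
		})
}

func main() {
	// Assumes the default kubeconfig location; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(cs, "functional-482100", 2*time.Minute); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}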
[... 09:00:38-09:01:58: eight further identical retry cycles elided. Each cycle is ten one-per-second GET requests to https://127.0.0.1:63845/api/v1/nodes/functional-482100, each answered in 1-4 ms with an empty status, and each cycle ends with the same node_ready.go:55 "will retry" EOF warning. ...]
	I1213 09:01:58.376426    1308 type.go:168] "Request Body" body=""
	I1213 09:01:58.377111    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:58.379740    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:01:59.379930    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:01:59.380415    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:01:59.383047    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 09:02:00.384221    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:00.384221    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:00.387516    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:02:01.388029    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:01.388029    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:01.392383    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 09:02:02.392602    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	I1213 09:02:02.392956    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:02:02.396482    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1213 09:02:03.397017    1308 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:63845/api/v1/nodes/functional-482100"
	[... the same GET to https://127.0.0.1:63845/api/v1/nodes/functional-482100 is re-issued once per second (attempts 1-10, each answered with an empty status in 1-6 ms plus a Retry-After), and after every tenth attempt node_ready.go:55 logs the same "will retry" warning before starting a new request cycle; this pattern repeats unchanged from 09:02:08 through 09:03:28 ...]
	W1213 09:03:28.741800    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): Get "https://127.0.0.1:63845/api/v1/nodes/functional-482100": EOF
	I1213 09:03:28.741870    1308 type.go:168] "Request Body" body=""
	I1213 09:03:28.741870    1308 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:63845/api/v1/nodes/functional-482100" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1213 09:03:28.744398    1308 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	[... the final cycle runs attempts 1-8, the last response arriving at 09:03:36.778885, before the 6-minute wait deadline fires ...]
	W1213 09:03:37.302575    1308 node_ready.go:55] error getting node "functional-482100" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1213 09:03:37.302575    1308 node_ready.go:38] duration metric: took 6m0.0011646s for node "functional-482100" to be "Ready" ...
	I1213 09:03:37.305847    1308 out.go:203] 
	W1213 09:03:37.307851    1308 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 09:03:37.307851    1308 out.go:285] * 
	W1213 09:03:37.311623    1308 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 09:03:37.314310    1308 out.go:203] 
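
The with_retry.go lines above show the standard client-go pattern behind this failure: every response to the node GET carries a Retry-After, so the client sleeps the indicated delay and re-issues the request, up to 10 attempts per call, until the surrounding 6-minute context deadline fires. A minimal standalone sketch of that loop, using only the Go standard library (the URL and attempt cap mirror the log; Retry-After is parsed only in its integer-seconds form; this is an illustration, not minikube's actual code):

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// getWithRetryAfter mimics the pattern in the log: re-issue the GET whenever
// the response carries a Retry-After header, up to maxAttempts times, and
// stop as soon as the context deadline expires.
func getWithRetryAfter(ctx context.Context, url string, maxAttempts int) (*http.Response, error) {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
		if err != nil {
			return nil, err
		}
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			return nil, err // e.g. the EOFs seen in the log
		}
		retryAfter := resp.Header.Get("Retry-After")
		if retryAfter == "" {
			return resp, nil // a real answer; no retry requested
		}
		resp.Body.Close()
		// Simplification: treat the header as integer seconds (it may also
		// be an HTTP date); fall back to 1s, the delay seen in the log.
		delay, err := time.ParseDuration(retryAfter + "s")
		if err != nil {
			delay = time.Second
		}
		fmt.Printf("Got a Retry-After response delay=%s attempt=%d url=%s\n", delay, attempt, url)
		select {
		case <-time.After(delay):
		case <-ctx.Done():
			return nil, ctx.Err() // "context deadline exceeded", as at 09:03:37
		}
	}
	return nil, fmt.Errorf("gave up after %d attempts", maxAttempts)
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	_, err := getWithRetryAfter(ctx, "https://127.0.0.1:63845/api/v1/nodes/functional-482100", 10)
	fmt.Println("result:", err)
}
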
	
	
	==> Docker <==
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.525747623Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.525754023Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.525775925Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.525849730Z" level=info msg="Initializing buildkit"
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.646190196Z" level=info msg="Completed buildkit initialization"
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.655073529Z" level=info msg="Daemon has completed initialization"
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.655186237Z" level=info msg="API listen on /run/docker.sock"
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.655229540Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 08:57:33 functional-482100 dockerd[10526]: time="2025-12-13T08:57:33.655448956Z" level=info msg="API listen on [::]:2376"
	Dec 13 08:57:33 functional-482100 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 13 08:57:33 functional-482100 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 08:57:33 functional-482100 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 13 08:57:33 functional-482100 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 13 08:57:34 functional-482100 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Start docker client with request timeout 0s"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Loaded network plugin cni"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 13 08:57:34 functional-482100 cri-dockerd[10847]: time="2025-12-13T08:57:34Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 13 08:57:34 functional-482100 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:06:41.260540   21111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:41.261746   21111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:41.262910   21111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:41.264266   21111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:41.265180   21111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000739] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000891] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001020] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001158] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001174] FS:  0000000000000000 GS:  0000000000000000
	[Dec13 08:57] CPU: 3 PID: 54870 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000964] RIP: 0033:0x7f5dc4ba4b20
	[  +0.000410] Code: Unable to access opcode bytes at RIP 0x7f5dc4ba4af6.
	[  +0.000689] RSP: 002b:00007ffdbe9599f0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000820] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000875] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001112] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001539] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001199] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001222] FS:  0000000000000000 GS:  0000000000000000
	[  +0.961990] CPU: 3 PID: 54996 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000796] RIP: 0033:0x7f46e6061b20
	[  +0.000388] Code: Unable to access opcode bytes at RIP 0x7f46e6061af6.
	[  +0.000654] RSP: 002b:00007ffd6f1408e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000776] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000787] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001010] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001229] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001341] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001210] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 09:06:41 up 42 min,  0 user,  load average: 0.57, 0.44, 0.57
	Linux functional-482100 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 09:06:37 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:06:38 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1059.
	Dec 13 09:06:38 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:06:38 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:06:38 functional-482100 kubelet[20955]: E1213 09:06:38.536128   20955 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:06:38 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:06:38 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:06:39 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1060.
	Dec 13 09:06:39 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:06:39 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:06:39 functional-482100 kubelet[20967]: E1213 09:06:39.277167   20967 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:06:39 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:06:39 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:06:39 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1061.
	Dec 13 09:06:39 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:06:39 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:06:40 functional-482100 kubelet[20979]: E1213 09:06:40.036196   20979 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:06:40 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:06:40 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:06:40 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1062.
	Dec 13 09:06:40 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:06:40 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:06:40 functional-482100 kubelet[21006]: E1213 09:06:40.795983   21006 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:06:40 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:06:40 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-482100 -n functional-482100
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-482100 -n functional-482100: exit status 2 (558.4754ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-482100" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (53.66s)
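The kubelet journal above is the root cause for this group of failures: on this cgroup v1 host (WSL2 kernel 5.15.153.1), kubelet v1.35.0-beta.0 refuses to start ("kubelet is configured to not run on a host using cgroup v1"), so the control-plane static pods never come up and the apiserver on localhost:8441 stays unreachable. A minimal sketch of the opt-out the error message points at, assuming a KubeletConfiguration patch is acceptable for this job and the standard kubelet config path (the field name matches the 'FailCgroupV1' option cited by kubeadm later in this report):

	# /var/lib/kubelet/config.yaml (fragment) -- sketch only, not applied in this run
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	# Keep running on a deprecated cgroup v1 host instead of failing validation.
	failCgroupV1: false

Moving the Docker Desktop / WSL2 backend to cgroup v2 would sidestep the deprecated path entirely.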

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (741.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-482100 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1213 09:07:36.666641    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:08:59.741879    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:09:46.007961    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:12:36.669955    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:12:49.083262    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:14:46.011088    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:17:36.673338    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-482100 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m17.7619152s)

-- stdout --
	* [functional-482100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting "functional-482100" primary control-plane node in "functional-482100" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

-- /stdout --
** stderr ** 
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00081318s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001708609s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001708609s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
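kubeadm's own hints above ('systemctl status kubelet', 'journalctl -xeu kubelet') can be run against this profile from the host; a sketch, assuming the node container is still up (the docker inspect below shows it Running) and following the `minikube ssh` usage already in this report's audit log:

	# Inspect the kubelet unit inside the minikube node container (sketch)
	out/minikube-windows-amd64.exe -p functional-482100 ssh "sudo systemctl status kubelet --no-pager"
	out/minikube-windows-amd64.exe -p functional-482100 ssh "sudo journalctl -xeu kubelet --no-pager"

The suggestion printed above can be retried the same way, though it tunes the cgroup driver rather than the cgroup v1 validation that is actually failing here:

	out/minikube-windows-amd64.exe start -p functional-482100 --extra-config=kubelet.cgroup-driver=systemd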
functional_test.go:774: failed to restart minikube. args "out/minikube-windows-amd64.exe start -p functional-482100 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m17.7711145s for "functional-482100" cluster.
I1213 09:19:00.439614    2968 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-482100
helpers_test.go:244: (dbg) docker inspect functional-482100:

-- stdout --
	[
	    {
	        "Id": "688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa",
	        "Created": "2025-12-13T08:49:07.27080474Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43282,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T08:49:07.556748749Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/hostname",
	        "HostsPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/hosts",
	        "LogPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa-json.log",
	        "Name": "/functional-482100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-482100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-482100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91-init/diff:/var/lib/docker/overlay2/429aa299c6fcdb1695d08ec7c893c57c033afffcd3ec41fc904bf3236db5abde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-482100",
	                "Source": "/var/lib/docker/volumes/functional-482100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-482100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-482100",
	                "name.minikube.sigs.k8s.io": "functional-482100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0846ee7b9ca8cb54809a7d685cd1bf9a4ebcad80c4fa7d3ad64c01e27d0c8bc4",
	            "SandboxKey": "/var/run/docker/netns/0846ee7b9ca8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63841"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63842"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63844"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63845"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-482100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "88ce21d6cbdebdf878313475255fe0fbc85957ab9cf1fa33630b61bbbfd2061c",
	                    "EndpointID": "88d9584a7fae8c35f7938fb422a7bed2f8ec5a3db15bd02c0d2459ed9f8f0e4d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-482100",
	                        "688ac19b4403"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
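The inspect output confirms the node container is Running and that the apiserver port 8441/tcp is published at 127.0.0.1:63845, the endpoint the earlier node-ready polls kept retrying; a sketch for extracting that mapping directly, assuming standard docker inspect Go-template syntax (quoting shown PowerShell-style; cmd.exe needs the inner quotes escaped):

	# Host port backing the apiserver (8441/tcp) for this profile (sketch)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-482100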
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-482100 -n functional-482100
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-482100 -n functional-482100: exit status 2 (609.8012ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-482100 logs -n 25: (1.2907831s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                          ARGS                                                           │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-213400 image ls --format yaml --alsologtostderr                                                              │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ ssh     │ functional-213400 ssh pgrep buildkitd                                                                                   │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │                     │
	│ image   │ functional-213400 image ls --format json --alsologtostderr                                                              │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ image   │ functional-213400 image build -t localhost/my-image:functional-213400 testdata\build --alsologtostderr                  │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ image   │ functional-213400 image ls --format table --alsologtostderr                                                             │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ image   │ functional-213400 image ls                                                                                              │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ delete  │ -p functional-213400                                                                                                    │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:48 UTC │ 13 Dec 25 08:48 UTC │
	│ start   │ -p functional-482100 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:48 UTC │                     │
	│ start   │ -p functional-482100 --alsologtostderr -v=8                                                                             │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:57 UTC │                     │
	│ cache   │ functional-482100 cache add registry.k8s.io/pause:3.1                                                                   │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ functional-482100 cache add registry.k8s.io/pause:3.3                                                                   │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ functional-482100 cache add registry.k8s.io/pause:latest                                                                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ functional-482100 cache add minikube-local-cache-test:functional-482100                                                 │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ functional-482100 cache delete minikube-local-cache-test:functional-482100                                              │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ list                                                                                                                    │ minikube          │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ ssh     │ functional-482100 ssh sudo crictl images                                                                                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ ssh     │ functional-482100 ssh sudo docker rmi registry.k8s.io/pause:latest                                                      │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ ssh     │ functional-482100 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │                     │
	│ cache   │ functional-482100 cache reload                                                                                          │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ ssh     │ functional-482100 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                     │ minikube          │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ kubectl │ functional-482100 kubectl -- --context functional-482100 get pods                                                       │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │                     │
	│ start   │ -p functional-482100 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:06:42
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:06:42.717723    4604 out.go:360] Setting OutFile to fd 964 ...
	I1213 09:06:42.759720    4604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:06:42.759720    4604 out.go:374] Setting ErrFile to fd 1684...
	I1213 09:06:42.759720    4604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:06:42.775684    4604 out.go:368] Setting JSON to false
	I1213 09:06:42.778565    4604 start.go:133] hostinfo: {"hostname":"minikube4","uptime":2610,"bootTime":1765614192,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 09:06:42.778565    4604 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 09:06:42.783192    4604 out.go:179] * [functional-482100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 09:06:42.786187    4604 notify.go:221] Checking for updates...
	I1213 09:06:42.786345    4604 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 09:06:42.788643    4604 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:06:42.791579    4604 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 09:06:42.793982    4604 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:06:42.796424    4604 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:06:42.798851    4604 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 09:06:42.799423    4604 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:06:42.991260    4604 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 09:06:42.994416    4604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:06:43.223298    4604 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-13 09:06:43.202416057 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 09:06:43.228742    4604 out.go:179] * Using the docker driver based on existing profile
	I1213 09:06:43.237191    4604 start.go:309] selected driver: docker
	I1213 09:06:43.237191    4604 start.go:927] validating driver "docker" against &{Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:06:43.238191    4604 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:06:43.244191    4604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:06:43.469724    4604 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-13 09:06:43.451401286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 09:06:43.566702    4604 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:06:43.567247    4604 cni.go:84] Creating CNI manager for ""
	I1213 09:06:43.567332    4604 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 09:06:43.567332    4604 start.go:353] cluster config:
	{Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:06:43.571338    4604 out.go:179] * Starting "functional-482100" primary control-plane node in "functional-482100" cluster
	I1213 09:06:43.574242    4604 cache.go:134] Beginning downloading kic base image for docker with docker
	I1213 09:06:43.576258    4604 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 09:06:43.580317    4604 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 09:06:43.580377    4604 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 09:06:43.580526    4604 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1213 09:06:43.580526    4604 cache.go:65] Caching tarball of preloaded images
	I1213 09:06:43.580984    4604 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 09:06:43.581085    4604 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1213 09:06:43.581294    4604 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\config.json ...
	I1213 09:06:43.661395    4604 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 09:06:43.661446    4604 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 09:06:43.661502    4604 cache.go:243] Successfully downloaded all kic artifacts
	I1213 09:06:43.661597    4604 start.go:360] acquireMachinesLock for functional-482100: {Name:mkdbad0c5d0c221588a4a9490c5c0730668b0a50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:06:43.661744    4604 start.go:364] duration metric: took 97.5µs to acquireMachinesLock for "functional-482100"
	I1213 09:06:43.661894    4604 start.go:96] Skipping create...Using existing machine configuration
	I1213 09:06:43.661968    4604 fix.go:54] fixHost starting: 
	I1213 09:06:43.668789    4604 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
	I1213 09:06:43.726255    4604 fix.go:112] recreateIfNeeded on functional-482100: state=Running err=<nil>
	W1213 09:06:43.726255    4604 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 09:06:43.729251    4604 out.go:252] * Updating the running docker "functional-482100" container ...
	I1213 09:06:43.729251    4604 machine.go:94] provisionDockerMachine start ...
	I1213 09:06:43.733252    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:43.788369    4604 main.go:143] libmachine: Using SSH client type: native
	I1213 09:06:43.788946    4604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 09:06:43.788946    4604 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 09:06:43.970841    4604 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-482100
	
	I1213 09:06:43.970841    4604 ubuntu.go:182] provisioning hostname "functional-482100"
	I1213 09:06:43.974885    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:44.031548    4604 main.go:143] libmachine: Using SSH client type: native
	I1213 09:06:44.032011    4604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 09:06:44.032011    4604 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-482100 && echo "functional-482100" | sudo tee /etc/hostname
	I1213 09:06:44.226185    4604 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-482100
	
	I1213 09:06:44.230480    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:44.283942    4604 main.go:143] libmachine: Using SSH client type: native
	I1213 09:06:44.284648    4604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 09:06:44.284648    4604 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-482100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-482100/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-482100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 09:06:44.459239    4604 main.go:143] libmachine: SSH cmd err, output: <nil>: 
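
The shell fragment above is idempotent: it rewrites (or appends) the Debian-style 127.0.1.1 entry only when no hosts line already ends in the node name. A minimal check of the result, assuming the profile name from this log:

    docker exec functional-482100 grep 127.0.1.1 /etc/hosts
    # expected: 127.0.1.1 functional-482100
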
	I1213 09:06:44.459239    4604 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1213 09:06:44.459239    4604 ubuntu.go:190] setting up certificates
	I1213 09:06:44.459239    4604 provision.go:84] configureAuth start
	I1213 09:06:44.464098    4604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-482100
	I1213 09:06:44.517408    4604 provision.go:143] copyHostCerts
	I1213 09:06:44.518409    4604 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1213 09:06:44.518409    4604 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1213 09:06:44.518409    4604 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1213 09:06:44.519524    4604 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1213 09:06:44.519524    4604 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1213 09:06:44.519524    4604 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1213 09:06:44.520761    4604 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1213 09:06:44.520761    4604 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1213 09:06:44.520761    4604 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1213 09:06:44.521333    4604 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-482100 san=[127.0.0.1 192.168.49.2 functional-482100 localhost minikube]
	I1213 09:06:44.683862    4604 provision.go:177] copyRemoteCerts
	I1213 09:06:44.688852    4604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 09:06:44.691943    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:44.744886    4604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 09:06:44.879038    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 09:06:44.911005    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 09:06:44.941373    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 09:06:44.969809    4604 provision.go:87] duration metric: took 510.5655ms to configureAuth
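
configureAuth regenerated the server certificate with SANs for 127.0.0.1, 192.168.49.2, functional-482100, localhost and minikube, then pushed it to /etc/docker inside the node. A spot-check that the pushed cert chains to the provisioned CA (a sketch; paths as in the scp lines above, openssl being present per the later openssl version step):

    docker exec functional-482100 sh -c 'openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem'
    # expected: /etc/docker/server.pem: OK
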
	I1213 09:06:44.969809    4604 ubuntu.go:206] setting minikube options for container-runtime
	I1213 09:06:44.970648    4604 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 09:06:44.974094    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:45.031966    4604 main.go:143] libmachine: Using SSH client type: native
	I1213 09:06:45.032404    4604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 09:06:45.032404    4604 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 09:06:45.211091    4604 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1213 09:06:45.211091    4604 ubuntu.go:71] root file system type: overlay
	I1213 09:06:45.211091    4604 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 09:06:45.214999    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:45.278005    4604 main.go:143] libmachine: Using SSH client type: native
	I1213 09:06:45.278423    4604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 09:06:45.278519    4604 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 09:06:45.475276    4604 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
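
As the unit's own comments explain, the bare ExecStart= directive resets the start command inherited from the base configuration; without it systemd would see two ExecStart= settings and refuse to start the service. One way to confirm a single start command survived in the installed unit (a sketch using the container name from this log):

    docker exec functional-482100 sh -c 'grep -c "^ExecStart=/usr/bin/dockerd" /lib/systemd/system/docker.service'
    # expected: 1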
	
	I1213 09:06:45.478711    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:45.533172    4604 main.go:143] libmachine: Using SSH client type: native
	I1213 09:06:45.533745    4604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 09:06:45.533745    4604 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 09:06:45.728810    4604 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 09:06:45.728810    4604 machine.go:97] duration metric: took 1.999543s to provisionDockerMachine
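
The command above is a write-compare-swap: diff -u exits zero when the freshly rendered unit matches the installed one, so the mv/daemon-reload/restart branch runs only on change; the empty output here indicates no change, so dockerd was left untouched at this step. The same pattern in isolation (NEW and CUR are illustrative names):

    NEW=/lib/systemd/system/docker.service.new
    CUR=/lib/systemd/system/docker.service
    sudo diff -u "$CUR" "$NEW" || {
        sudo mv "$NEW" "$CUR"
        sudo systemctl daemon-reload && sudo systemctl restart docker
    }
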
	I1213 09:06:45.728810    4604 start.go:293] postStartSetup for "functional-482100" (driver="docker")
	I1213 09:06:45.728810    4604 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 09:06:45.732939    4604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 09:06:45.736061    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:45.792193    4604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 09:06:45.929940    4604 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 09:06:45.938024    4604 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 09:06:45.938024    4604 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 09:06:45.938024    4604 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1213 09:06:45.939007    4604 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1213 09:06:45.939007    4604 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> 29682.pem in /etc/ssl/certs
	I1213 09:06:45.940034    4604 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\2968\hosts -> hosts in /etc/test/nested/copy/2968
	I1213 09:06:45.944509    4604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2968
	I1213 09:06:45.956570    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /etc/ssl/certs/29682.pem (1708 bytes)
	I1213 09:06:45.988344    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\2968\hosts --> /etc/test/nested/copy/2968/hosts (40 bytes)
	I1213 09:06:46.020180    4604 start.go:296] duration metric: took 291.3676ms for postStartSetup
	I1213 09:06:46.024635    4604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:06:46.027628    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:46.080253    4604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 09:06:46.215093    4604 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 09:06:46.224875    4604 fix.go:56] duration metric: took 2.5628868s for fixHost
	I1213 09:06:46.224875    4604 start.go:83] releasing machines lock for "functional-482100", held for 2.5631106s
	I1213 09:06:46.227979    4604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-482100
	I1213 09:06:46.281460    4604 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1213 09:06:46.284589    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:46.284589    4604 ssh_runner.go:195] Run: cat /version.json
	I1213 09:06:46.287589    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:46.339381    4604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 09:06:46.341884    4604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	W1213 09:06:46.471031    4604 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
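
Exit status 127 with "curl.exe: command not found" means the probe never ran: the Windows binary name was invoked inside the Linux node container, which has no curl.exe. The registry warning emitted a few lines below is therefore the echo of a failed probe rather than proof of a network problem. The equivalent probe with the Linux binary name (a sketch; assumes curl is available in the kicbase image):

    docker exec functional-482100 curl -sS -m 2 https://registry.k8s.io/
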
	I1213 09:06:46.475772    4604 ssh_runner.go:195] Run: systemctl --version
	I1213 09:06:46.491471    4604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 09:06:46.501246    4604 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 09:06:46.506902    4604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 09:06:46.521536    4604 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 09:06:46.521536    4604 start.go:496] detecting cgroup driver to use...
	I1213 09:06:46.521536    4604 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 09:06:46.521536    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 09:06:46.547922    4604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 09:06:46.569619    4604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 09:06:46.584943    4604 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 09:06:46.588980    4604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1213 09:06:46.598267    4604 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1213 09:06:46.598267    4604 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1213 09:06:46.612904    4604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 09:06:46.631660    4604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 09:06:46.651016    4604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 09:06:46.672904    4604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 09:06:46.691930    4604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 09:06:46.710477    4604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 09:06:46.730250    4604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 09:06:46.750913    4604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 09:06:46.770554    4604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 09:06:46.792378    4604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:06:47.034402    4604 ssh_runner.go:195] Run: sudo systemctl restart containerd
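
The sed batch above pins the sandbox image to registry.k8s.io/pause:3.10.1, forces SystemdCgroup = false to match the detected cgroupfs driver, and rewrites legacy runtime names to io.containerd.runc.v2 before containerd is restarted. A quick look at the keys those edits touch (container name from this log):

    docker exec functional-482100 sh -c 'grep -E "SystemdCgroup|sandbox_image" /etc/containerd/config.toml'
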
	I1213 09:06:47.276302    4604 start.go:496] detecting cgroup driver to use...
	I1213 09:06:47.276363    4604 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 09:06:47.280722    4604 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 09:06:47.305066    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 09:06:47.327135    4604 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 09:06:47.404977    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 09:06:47.431107    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 09:06:47.450015    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 09:06:47.478646    4604 ssh_runner.go:195] Run: which cri-dockerd
	I1213 09:06:47.491243    4604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 09:06:47.503124    4604 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1213 09:06:47.527239    4604 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 09:06:47.667767    4604 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 09:06:47.799062    4604 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 09:06:47.799062    4604 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 09:06:47.826470    4604 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1213 09:06:47.848448    4604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:06:47.994955    4604 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 09:06:48.954293    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 09:06:48.976829    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 09:06:49.001926    4604 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1213 09:06:49.028432    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 09:06:49.050748    4604 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 09:06:49.205807    4604 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 09:06:49.342941    4604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:06:49.483831    4604 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 09:06:49.508934    4604 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1213 09:06:49.531916    4604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:06:49.703017    4604 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 09:06:49.814910    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 09:06:49.832973    4604 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 09:06:49.837568    4604 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 09:06:49.846585    4604 start.go:564] Will wait 60s for crictl version
	I1213 09:06:49.850486    4604 ssh_runner.go:195] Run: which crictl
	I1213 09:06:49.861564    4604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 09:06:49.905261    4604 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1213 09:06:49.909293    4604 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 09:06:49.949851    4604 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 09:06:49.999228    4604 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1213 09:06:50.003267    4604 cli_runner.go:164] Run: docker exec -t functional-482100 dig +short host.docker.internal
	I1213 09:06:50.178404    4604 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1213 09:06:50.184053    4604 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1213 09:06:50.194897    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:50.254370    4604 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1213 09:06:50.256155    4604 kubeadm.go:884] updating cluster {Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 09:06:50.256766    4604 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 09:06:50.259593    4604 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 09:06:50.291635    4604 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-482100
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1213 09:06:50.291635    4604 docker.go:621] Images already preloaded, skipping extraction
	I1213 09:06:50.295568    4604 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 09:06:50.325004    4604 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-482100
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1213 09:06:50.325004    4604 cache_images.go:86] Images are preloaded, skipping loading
	I1213 09:06:50.325004    4604 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1213 09:06:50.325004    4604 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-482100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
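
The kubelet unit above uses the same empty-then-set ExecStart pair as the docker unit, and is delivered via the 10-kubeadm.conf drop-in scp'd a few lines below. To inspect the merged unit systemd will actually run (a sketch):

    docker exec functional-482100 systemctl cat kubelet.service
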
	I1213 09:06:50.328257    4604 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1213 09:06:50.622080    4604 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1213 09:06:50.622145    4604 cni.go:84] Creating CNI manager for ""
	I1213 09:06:50.622145    4604 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 09:06:50.622208    4604 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 09:06:50.622208    4604 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-482100 NodeName:functional-482100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 09:06:50.622373    4604 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-482100"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
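
The generated config stacks four documents: InitConfiguration (node registration and the 8441 bind port), ClusterConfiguration (per-component extraArgs, including the enable-admission-plugins=NamespaceAutoProvision override under test), KubeletConfiguration (cgroupfs driver, disk eviction effectively disabled) and KubeProxyConfiguration. A hedged dry-run check with the bundled binary (path and staged file name as in the lines below; kubeadm config validate exists in kubeadm v1.26 and later):

    docker exec functional-482100 sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new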
	
	I1213 09:06:50.626372    4604 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 09:06:50.640912    4604 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 09:06:50.644769    4604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 09:06:50.657199    4604 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1213 09:06:50.677193    4604 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 09:06:50.697253    4604 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I1213 09:06:50.723871    4604 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 09:06:50.735113    4604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:06:50.895085    4604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:06:51.205789    4604 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100 for IP: 192.168.49.2
	I1213 09:06:51.205789    4604 certs.go:195] generating shared ca certs ...
	I1213 09:06:51.205789    4604 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:06:51.206694    4604 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1213 09:06:51.206931    4604 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1213 09:06:51.207202    4604 certs.go:257] generating profile certs ...
	I1213 09:06:51.207247    4604 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\client.key
	I1213 09:06:51.207958    4604 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.key.13621831
	I1213 09:06:51.207958    4604 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.key
	I1213 09:06:51.208796    4604 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem (1338 bytes)
	W1213 09:06:51.208796    4604 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968_empty.pem, impossibly tiny 0 bytes
	I1213 09:06:51.209325    4604 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1213 09:06:51.209671    4604 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1213 09:06:51.209671    4604 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1213 09:06:51.209671    4604 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1213 09:06:51.210415    4604 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem (1708 bytes)
	I1213 09:06:51.211988    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 09:06:51.241166    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 09:06:51.270190    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 09:06:51.305732    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 09:06:51.336212    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 09:06:51.365643    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 09:06:51.395250    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 09:06:51.426424    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 09:06:51.456416    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 09:06:51.485568    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem --> /usr/share/ca-certificates/2968.pem (1338 bytes)
	I1213 09:06:51.513607    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /usr/share/ca-certificates/29682.pem (1708 bytes)
	I1213 09:06:51.544659    4604 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
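The last line copies the kubeconfig straight from memory rather than from a file on disk. A hedged Go sketch of that idea, piping in-memory bytes into sudo tee over a plain ssh subprocess (the host alias minikube-node and this transport are assumptions; minikube's ssh_runner has its own SSH session plumbing):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// copyFromMemory streams contents to dst on the remote host, the moral
// equivalent of the "scp memory --> /var/lib/minikube/kubeconfig" step above.
func copyFromMemory(host, dst string, contents []byte) error {
	cmd := exec.Command("ssh", host, fmt.Sprintf("sudo tee %s >/dev/null", dst))
	cmd.Stdin = bytes.NewReader(contents)
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	kubeconfig := []byte("apiVersion: v1\nkind: Config\n") // stand-in for the 738-byte kubeconfig
	if err := copyFromMemory("minikube-node", "/var/lib/minikube/kubeconfig", kubeconfig); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}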
	I1213 09:06:51.569245    4604 ssh_runner.go:195] Run: openssl version
	I1213 09:06:51.589082    4604 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:06:51.610612    4604 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 09:06:51.632111    4604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:06:51.640287    4604 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:06:51.644860    4604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:06:51.695068    4604 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 09:06:51.712089    4604 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2968.pem
	I1213 09:06:51.730159    4604 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2968.pem /etc/ssl/certs/2968.pem
	I1213 09:06:51.750455    4604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2968.pem
	I1213 09:06:51.759490    4604 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:48 /usr/share/ca-certificates/2968.pem
	I1213 09:06:51.764057    4604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2968.pem
	I1213 09:06:51.813702    4604 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 09:06:51.830987    4604 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29682.pem
	I1213 09:06:51.848737    4604 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29682.pem /etc/ssl/certs/29682.pem
	I1213 09:06:51.866735    4604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29682.pem
	I1213 09:06:51.874087    4604 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:48 /usr/share/ca-certificates/29682.pem
	I1213 09:06:51.878230    4604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29682.pem
	I1213 09:06:51.926970    4604 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
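The sequence above installs each CA into the trust store twice: once under its own name, and once under the OpenSSL subject hash (the b5213941.0, 51391683.0, and 3ec20f2e.0 symlinks being verified with test -L). A rough Go equivalent, run locally instead of over SSH (illustrative sketch, not minikube's code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links pem into /etc/ssl/certs under its own name, then under
// its OpenSSL subject hash so TLS libraries can find it by hash lookup.
func installCA(pem string) error {
	target := filepath.Join("/etc/ssl/certs", filepath.Base(pem))
	if err := exec.Command("sudo", "ln", "-fs", pem, target).Run(); err != nil {
		return err
	}
	// `openssl x509 -hash -noout -in <pem>` prints the subject hash, e.g. b5213941.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hashLink := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	return exec.Command("sudo", "ln", "-fs", pem, hashLink).Run()
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}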
	I1213 09:06:51.943705    4604 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 09:06:51.956247    4604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 09:06:52.006902    4604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 09:06:52.056817    4604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 09:06:52.106649    4604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 09:06:52.159409    4604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 09:06:52.206463    4604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
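These openssl -checkend 86400 runs ask whether each control-plane cert is still valid 24 hours from now. The same check in pure Go (a sketch assuming plain PEM-encoded certs on disk; not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path will have expired
// `window` from now, matching openssl's -checkend semantics.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		expiring, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Printf("%s expiring within 24h: %v\n", p, expiring)
	}
}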
	I1213 09:06:52.251679    4604 kubeadm.go:401] StartCluster: {Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:06:52.256595    4604 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 09:06:52.289711    4604 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 09:06:52.303076    4604 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 09:06:52.303076    4604 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 09:06:52.307600    4604 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 09:06:52.319493    4604 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:06:52.323244    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:52.375973    4604 kubeconfig.go:125] found "functional-482100" server: "https://127.0.0.1:63845"
	I1213 09:06:52.384564    4604 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 09:06:52.400436    4604 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 08:49:19.464397186 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 09:06:50.708121923 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
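Drift detection here leans on diff's exit status: 0 means the stored kubeadm.yaml matches the freshly rendered one, 1 means it changed (as with the enable-admission-plugins value above). A small Go sketch of that convention (the function name configDrift is invented):

package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

// configDrift runs `diff -u oldPath newPath` and interprets the exit code:
// 0 = identical, 1 = files differ (out holds the unified diff), 2 = trouble.
func configDrift(oldPath, newPath string) (string, bool, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).Output()
	if err == nil {
		return "", false, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return string(out), true, nil
	}
	return "", false, err
}

func main() {
	diff, drifted, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if drifted {
		fmt.Println("detected kubeadm config drift:\n" + diff)
	}
}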
	I1213 09:06:52.400436    4604 kubeadm.go:1161] stopping kube-system containers ...
	I1213 09:06:52.404765    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 09:06:52.439058    4604 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 09:06:52.463926    4604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 09:06:52.476815    4604 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 13 08:53 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 13 08:53 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 13 08:53 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 13 08:53 /etc/kubernetes/scheduler.conf
	
	I1213 09:06:52.482061    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 09:06:52.502735    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 09:06:52.519106    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:06:52.523157    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 09:06:52.541594    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 09:06:52.557952    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:06:52.562286    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 09:06:52.581460    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 09:06:52.594972    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:06:52.600191    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
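Each grep above probes a kubeconfig for the expected control-plane endpoint; a miss (exit status 1) gets the file removed so the next kubeadm phase regenerates it. Sketched in Go with a substring check standing in for grep (assumption: reading the files directly rather than over SSH):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// pruneStaleConf deletes any config that does not mention the expected
// endpoint, mirroring the grep-then-rm cycle in the log above.
func pruneStaleConf(endpoint string, paths ...string) error {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			return err
		}
		if !bytes.Contains(data, []byte(endpoint)) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, p)
			if err := os.Remove(p); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	err := pruneStaleConf("https://control-plane.minikube.internal:8441",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}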
	I1213 09:06:52.618621    4604 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 09:06:52.641664    4604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 09:06:52.896546    4604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 09:06:53.462301    4604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 09:06:53.694179    4604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 09:06:53.760215    4604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
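The restart path then replays five kubeadm init phases in order, with the pinned Kubernetes binaries directory first on PATH. A compact Go sketch of the same sequence (illustrative; error handling and output plumbing simplified):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runPhases executes the same five phases, in the same order, as the log above.
func runPhases() error {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmd := exec.Command("sudo", "/bin/bash", "-c",
			`env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase `+
				phase+` --config /var/tmp/minikube/kubeadm.yaml`)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("kubeadm init phase %s: %w", phase, err)
		}
	}
	return nil
}

func main() {
	if err := runPhases(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}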
	I1213 09:06:53.817909    4604 api_server.go:52] waiting for apiserver process to appear ...
	I1213 09:06:53.824127    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:54.324298    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:54.823616    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:55.323720    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:55.823860    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:56.324648    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:56.823338    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:57.323932    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:57.823662    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:58.325441    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:58.823290    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:59.324178    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:59.823834    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:00.323384    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:00.824342    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:01.322728    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:01.825381    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:02.323125    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:02.823650    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:03.323054    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:03.823648    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:04.323519    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:04.822908    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:05.323004    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:05.823657    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:06.324223    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:06.822603    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:07.322828    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:07.824194    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:08.323166    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:08.823223    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:09.322943    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:09.823068    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:10.323743    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:10.823847    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:11.325801    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:11.823253    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:12.323701    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:12.823566    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:13.323096    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:13.822920    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:14.323236    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:14.822845    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:15.323202    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:15.823028    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:16.320733    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:16.823214    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:17.323253    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:17.823515    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:18.323838    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:18.822838    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:19.323955    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:19.823948    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:20.324026    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:20.823129    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:21.323245    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:21.823815    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:22.323343    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:22.823677    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:23.323428    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:23.823426    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:24.323295    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:24.823766    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:25.323104    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:25.824973    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:26.323001    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:26.822856    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:27.323222    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:27.824487    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:28.325702    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:28.823423    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:29.324186    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:29.824044    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:30.324049    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:30.822878    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:31.323296    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:31.823313    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:32.322735    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:32.824301    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:33.324665    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:33.823915    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:34.323027    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:34.823403    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:35.323680    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:35.824836    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:36.323334    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:36.823224    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:37.324136    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:37.824342    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:38.323652    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:38.825016    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:39.325354    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:39.824443    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:40.323965    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:40.824628    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:41.324070    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:41.824202    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:42.325124    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:42.823287    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:43.324764    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:43.823938    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:44.323817    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:44.823922    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:45.324123    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:45.824182    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:46.325015    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:46.824205    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:47.323091    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:47.823407    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:48.322847    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:48.823901    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:49.325349    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:49.824694    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:50.323496    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:50.824112    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:51.323585    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:51.825519    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:52.323663    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:52.824612    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:53.324473    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
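After the etcd phase the runner polls for the apiserver process roughly twice a second; the run above goes a full minute without a hit before falling back to log gathering. A hedged Go sketch of that wait loop (the one-minute timeout is inferred from the timestamps, not taken from minikube's source):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServer polls pgrep every 500ms until it reports a PID or the
// deadline passes; pgrep exits 0 only when a process matched.
func waitForAPIServer(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("apiserver process never appeared within %s", timeout)
}

func main() {
	pid, err := waitForAPIServer(time.Minute)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("kube-apiserver pid:", pid)
}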
	I1213 09:07:53.823636    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:07:53.968254    4604 logs.go:282] 0 containers: []
	W1213 09:07:53.968254    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:07:53.971723    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:07:54.005821    4604 logs.go:282] 0 containers: []
	W1213 09:07:54.005868    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:07:54.009997    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:07:54.043633    4604 logs.go:282] 0 containers: []
	W1213 09:07:54.043633    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:07:54.047702    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:07:54.077692    4604 logs.go:282] 0 containers: []
	W1213 09:07:54.077692    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:07:54.081464    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:07:54.109644    4604 logs.go:282] 0 containers: []
	W1213 09:07:54.109644    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:07:54.113266    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:07:54.141926    4604 logs.go:282] 0 containers: []
	W1213 09:07:54.141926    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:07:54.145352    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:07:54.178100    4604 logs.go:282] 0 containers: []
	W1213 09:07:54.178100    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:07:54.178100    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:07:54.178164    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:07:54.252196    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:07:54.252196    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:07:54.284935    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:07:54.285971    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:07:54.538213    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:07:54.529451   23686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:54.530614   23686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:54.531692   23686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:54.532968   23686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:54.534319   23686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:07:54.529451   23686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:54.530614   23686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:54.531692   23686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:54.532968   23686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:54.534319   23686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:07:54.538213    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:07:54.538213    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:07:54.583090    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:07:54.583090    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
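The final probe in each gathering pass uses a shell fallback: crictl if it resolves, otherwise docker ps -a. The same preference order in Go (a sketch; exec.LookPath stands in for `which`):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// containerStatus prefers crictl when it is on PATH, falling back to docker,
// mirroring `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`.
func containerStatus() ([]byte, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
			return out, nil
		}
	}
	return exec.Command("sudo", "docker", "ps", "-a").Output()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	os.Stdout.Write(out)
}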
	I1213 09:07:57.312809    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:57.335927    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:07:57.368850    4604 logs.go:282] 0 containers: []
	W1213 09:07:57.368850    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:07:57.372314    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:07:57.414423    4604 logs.go:282] 0 containers: []
	W1213 09:07:57.414423    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:07:57.418091    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:07:57.445624    4604 logs.go:282] 0 containers: []
	W1213 09:07:57.445624    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:07:57.450351    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:07:57.478804    4604 logs.go:282] 0 containers: []
	W1213 09:07:57.478804    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:07:57.482347    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:07:57.515270    4604 logs.go:282] 0 containers: []
	W1213 09:07:57.515270    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:07:57.519226    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:07:57.550203    4604 logs.go:282] 0 containers: []
	W1213 09:07:57.550203    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:07:57.553796    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:07:57.581350    4604 logs.go:282] 0 containers: []
	W1213 09:07:57.581350    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:07:57.581350    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:07:57.581350    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:07:57.643200    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:07:57.643200    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:07:57.673988    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:07:57.673988    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:07:57.760392    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:07:57.746772   23846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:57.747806   23846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:57.748611   23846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:57.753804   23846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:57.755158   23846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:07:57.746772   23846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:57.747806   23846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:57.748611   23846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:57.753804   23846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:57.755158   23846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:07:57.760392    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:07:57.760392    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:07:57.802849    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:07:57.802849    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:00.359379    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:00.382695    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:00.413789    4604 logs.go:282] 0 containers: []
	W1213 09:08:00.413789    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:00.417939    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:00.446378    4604 logs.go:282] 0 containers: []
	W1213 09:08:00.446378    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:00.449613    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:00.482176    4604 logs.go:282] 0 containers: []
	W1213 09:08:00.482176    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:00.485918    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:00.515814    4604 logs.go:282] 0 containers: []
	W1213 09:08:00.515814    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:00.519425    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:00.550561    4604 logs.go:282] 0 containers: []
	W1213 09:08:00.550614    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:00.554312    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:00.581925    4604 logs.go:282] 0 containers: []
	W1213 09:08:00.582019    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:00.586945    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:00.614309    4604 logs.go:282] 0 containers: []
	W1213 09:08:00.614309    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:00.614309    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:00.614309    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:00.677303    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:00.677303    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:00.708357    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:00.708388    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:00.792820    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:00.783680   24008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:00.784993   24008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:00.786265   24008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:00.787013   24008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:00.789215   24008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:08:00.783680   24008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:00.784993   24008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:00.786265   24008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:00.787013   24008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:00.789215   24008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:08:00.792820    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:00.792820    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:00.834035    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:00.834035    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:03.387456    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:03.409689    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:03.440566    4604 logs.go:282] 0 containers: []
	W1213 09:08:03.440566    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:03.446132    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:03.481808    4604 logs.go:282] 0 containers: []
	W1213 09:08:03.481808    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:03.484917    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:03.516053    4604 logs.go:282] 0 containers: []
	W1213 09:08:03.516053    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:03.519249    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:03.549448    4604 logs.go:282] 0 containers: []
	W1213 09:08:03.549448    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:03.553206    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:03.580932    4604 logs.go:282] 0 containers: []
	W1213 09:08:03.580932    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:03.585400    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:03.615096    4604 logs.go:282] 0 containers: []
	W1213 09:08:03.615096    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:03.618691    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:03.650537    4604 logs.go:282] 0 containers: []
	W1213 09:08:03.650537    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:03.650537    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:03.650537    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:03.715560    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:03.715560    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:03.745557    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:03.745557    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:03.830341    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:03.818412   24157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:03.819378   24157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:03.820920   24157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:03.822091   24157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:03.823691   24157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:08:03.818412   24157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:03.819378   24157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:03.820920   24157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:03.822091   24157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:03.823691   24157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:08:03.830341    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:03.830341    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:03.873599    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:03.873599    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:06.430406    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:06.454482    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:06.484232    4604 logs.go:282] 0 containers: []
	W1213 09:08:06.484232    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:06.489209    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:06.519685    4604 logs.go:282] 0 containers: []
	W1213 09:08:06.519685    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:06.523281    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:06.552228    4604 logs.go:282] 0 containers: []
	W1213 09:08:06.552228    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:06.556002    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:06.585247    4604 logs.go:282] 0 containers: []
	W1213 09:08:06.585301    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:06.588771    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:06.616709    4604 logs.go:282] 0 containers: []
	W1213 09:08:06.616709    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:06.622086    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:06.649957    4604 logs.go:282] 0 containers: []
	W1213 09:08:06.649957    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:06.653592    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:06.684273    4604 logs.go:282] 0 containers: []
	W1213 09:08:06.684273    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:06.684273    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:06.684273    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:06.712577    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:06.712577    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:06.795376    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:06.784575   24302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:06.785371   24302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:06.786679   24302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:06.787911   24302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:06.789050   24302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:08:06.784575   24302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:06.785371   24302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:06.786679   24302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:06.787911   24302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:06.789050   24302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:08:06.795376    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:06.795898    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:06.839065    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:06.839065    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:06.889079    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:06.889079    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:09.455581    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:09.480052    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:09.512625    4604 logs.go:282] 0 containers: []
	W1213 09:08:09.512625    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:09.516455    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:09.542431    4604 logs.go:282] 0 containers: []
	W1213 09:08:09.542499    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:09.547418    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:09.577381    4604 logs.go:282] 0 containers: []
	W1213 09:08:09.577381    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:09.581054    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:09.609734    4604 logs.go:282] 0 containers: []
	W1213 09:08:09.609809    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:09.614960    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:09.640858    4604 logs.go:282] 0 containers: []
	W1213 09:08:09.640858    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:09.644539    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:09.673297    4604 logs.go:282] 0 containers: []
	W1213 09:08:09.673324    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:09.676963    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:09.706066    4604 logs.go:282] 0 containers: []
	W1213 09:08:09.706097    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:09.706097    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:09.706097    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:09.770379    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:09.770379    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:09.800715    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:09.800715    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:09.888345    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:09.874561   24459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:09.876116   24459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:09.878447   24459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:09.880145   24459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:09.881085   24459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:08:09.874561   24459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:09.876116   24459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:09.878447   24459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:09.880145   24459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:09.881085   24459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:08:09.888366    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:09.888366    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:09.931503    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:09.931503    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:12.488194    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:12.511945    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:12.543092    4604 logs.go:282] 0 containers: []
	W1213 09:08:12.543092    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:12.546813    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:12.575244    4604 logs.go:282] 0 containers: []
	W1213 09:08:12.575244    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:12.579183    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:12.606211    4604 logs.go:282] 0 containers: []
	W1213 09:08:12.606211    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:12.609921    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:12.638793    4604 logs.go:282] 0 containers: []
	W1213 09:08:12.638793    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:12.642301    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:12.671214    4604 logs.go:282] 0 containers: []
	W1213 09:08:12.671250    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:12.675013    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:12.704218    4604 logs.go:282] 0 containers: []
	W1213 09:08:12.704218    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:12.708216    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:12.738811    4604 logs.go:282] 0 containers: []
	W1213 09:08:12.738811    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:12.738811    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:12.738811    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:12.801161    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:12.801161    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:12.830060    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:12.831060    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:12.915147    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:12.903878   24612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:12.904809   24612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:12.906430   24612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:12.907805   24612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:12.908973   24612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:08:12.903878   24612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:12.904809   24612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:12.906430   24612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:12.907805   24612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:12.908973   24612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:08:12.915147    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:12.915147    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:12.956625    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:12.956625    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:15.510904    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:15.533124    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:15.562214    4604 logs.go:282] 0 containers: []
	W1213 09:08:15.562214    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:15.565621    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:15.590955    4604 logs.go:282] 0 containers: []
	W1213 09:08:15.591009    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:15.594833    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:15.624408    4604 logs.go:282] 0 containers: []
	W1213 09:08:15.624408    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:15.628727    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:15.659837    4604 logs.go:282] 0 containers: []
	W1213 09:08:15.659837    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:15.663513    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:15.690393    4604 logs.go:282] 0 containers: []
	W1213 09:08:15.690393    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:15.693797    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:15.724206    4604 logs.go:282] 0 containers: []
	W1213 09:08:15.724206    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:15.730221    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:15.758038    4604 logs.go:282] 0 containers: []
	W1213 09:08:15.758038    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:15.758038    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:15.758038    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:15.820934    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:15.820934    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:15.851382    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:15.851382    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:15.931108    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:15.919902   24760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:15.921621   24760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:15.922751   24760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:15.924650   24760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:15.925746   24760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:08:15.919902   24760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:15.921621   24760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:15.922751   24760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:15.924650   24760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:15.925746   24760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:08:15.931108    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:15.931108    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:15.972073    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:15.972073    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:18.529296    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:18.551856    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:18.582603    4604 logs.go:282] 0 containers: []
	W1213 09:08:18.582603    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:18.586131    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:18.615914    4604 logs.go:282] 0 containers: []
	W1213 09:08:18.615914    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:18.619071    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:18.647226    4604 logs.go:282] 0 containers: []
	W1213 09:08:18.647314    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:18.650885    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:18.677834    4604 logs.go:282] 0 containers: []
	W1213 09:08:18.677834    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:18.681465    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:18.710780    4604 logs.go:282] 0 containers: []
	W1213 09:08:18.710819    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:18.715047    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:18.742085    4604 logs.go:282] 0 containers: []
	W1213 09:08:18.742085    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:18.746505    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:18.773319    4604 logs.go:282] 0 containers: []
	W1213 09:08:18.773319    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:18.773319    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:18.773374    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:18.837290    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:18.837290    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:18.866989    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:18.866989    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:18.948930    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:18.936159   24911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:18.939732   24911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:18.940602   24911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:18.942440   24911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:18.944294   24911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:08:18.936159   24911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:18.939732   24911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:18.940602   24911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:18.942440   24911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:18.944294   24911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:08:18.948930    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:18.948930    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:18.991657    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:18.991657    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:21.549759    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:21.572464    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:21.600790    4604 logs.go:282] 0 containers: []
	W1213 09:08:21.600818    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:21.604078    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:21.633799    4604 logs.go:282] 0 containers: []
	W1213 09:08:21.633799    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:21.637744    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:21.665485    4604 logs.go:282] 0 containers: []
	W1213 09:08:21.665485    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:21.669376    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:21.699844    4604 logs.go:282] 0 containers: []
	W1213 09:08:21.699844    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:21.706394    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:21.735819    4604 logs.go:282] 0 containers: []
	W1213 09:08:21.735819    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:21.738827    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:21.766879    4604 logs.go:282] 0 containers: []
	W1213 09:08:21.766879    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:21.770728    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:21.798832    4604 logs.go:282] 0 containers: []
	W1213 09:08:21.798867    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:21.798867    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:21.798867    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:21.863860    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:21.863860    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:21.896284    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:21.896284    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:21.976382    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:21.965807   25066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:21.966601   25066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:21.969521   25066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:21.971003   25066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:21.972104   25066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:08:21.965807   25066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:21.966601   25066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:21.969521   25066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:21.971003   25066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:21.972104   25066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:08:21.976382    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:21.976382    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:22.019285    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:22.019285    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:24.577418    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:24.603278    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:24.639919    4604 logs.go:282] 0 containers: []
	W1213 09:08:24.639919    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:24.643610    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:24.669667    4604 logs.go:282] 0 containers: []
	W1213 09:08:24.669690    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:24.672641    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:24.702942    4604 logs.go:282] 0 containers: []
	W1213 09:08:24.702995    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:24.706810    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:24.734192    4604 logs.go:282] 0 containers: []
	W1213 09:08:24.734192    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:24.737895    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:24.769567    4604 logs.go:282] 0 containers: []
	W1213 09:08:24.769597    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:24.773373    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:24.803190    4604 logs.go:282] 0 containers: []
	W1213 09:08:24.803190    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:24.807117    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:24.838064    4604 logs.go:282] 0 containers: []
	W1213 09:08:24.838064    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:24.838064    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:24.838138    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:24.901072    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:24.901072    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:24.931306    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:24.931306    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:25.017636    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:25.007253   25216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:25.008264   25216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:25.009244   25216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:25.011513   25216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:25.013011   25216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:08:25.007253   25216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:25.008264   25216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:25.009244   25216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:25.011513   25216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:25.013011   25216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:08:25.017636    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:25.017636    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:25.060810    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:25.060810    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:27.623166    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:27.647045    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:27.677340    4604 logs.go:282] 0 containers: []
	W1213 09:08:27.677340    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:27.680821    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:27.708576    4604 logs.go:282] 0 containers: []
	W1213 09:08:27.708576    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:27.712514    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:27.743161    4604 logs.go:282] 0 containers: []
	W1213 09:08:27.743161    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:27.746176    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:27.775854    4604 logs.go:282] 0 containers: []
	W1213 09:08:27.775854    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:27.779689    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:27.808373    4604 logs.go:282] 0 containers: []
	W1213 09:08:27.808373    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:27.814962    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:27.841903    4604 logs.go:282] 0 containers: []
	W1213 09:08:27.841903    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:27.847177    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:27.876941    4604 logs.go:282] 0 containers: []
	W1213 09:08:27.876941    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:27.876941    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:27.876941    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:27.937569    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:27.937569    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:27.967918    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:27.967918    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:28.051195    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:28.041767   25367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:28.043106   25367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:28.044618   25367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:28.045585   25367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:28.046883   25367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:08:28.041767   25367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:28.043106   25367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:28.044618   25367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:28.045585   25367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:28.046883   25367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:08:28.051195    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:28.051195    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:28.091557    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:28.091557    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:30.648207    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:30.671041    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:30.701387    4604 logs.go:282] 0 containers: []
	W1213 09:08:30.701387    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:30.705353    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:30.736395    4604 logs.go:282] 0 containers: []
	W1213 09:08:30.736395    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:30.740850    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:30.768626    4604 logs.go:282] 0 containers: []
	W1213 09:08:30.768704    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:30.772180    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:30.799431    4604 logs.go:282] 0 containers: []
	W1213 09:08:30.799504    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:30.803459    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:30.831305    4604 logs.go:282] 0 containers: []
	W1213 09:08:30.831305    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:30.835828    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:30.864498    4604 logs.go:282] 0 containers: []
	W1213 09:08:30.864498    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:30.868346    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:30.895559    4604 logs.go:282] 0 containers: []
	W1213 09:08:30.895559    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:30.895559    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:30.895559    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:30.960230    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:30.960230    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:30.989103    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:30.989103    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:31.064421    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:31.054673   25520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:31.055288   25520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:31.057455   25520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:31.058494   25520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:31.059785   25520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:08:31.054673   25520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:31.055288   25520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:31.057455   25520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:31.058494   25520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:31.059785   25520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:08:31.064516    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:31.064547    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:31.104938    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:31.104938    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:33.662266    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:33.687669    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:33.719674    4604 logs.go:282] 0 containers: []
	W1213 09:08:33.719674    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:33.723494    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:33.753735    4604 logs.go:282] 0 containers: []
	W1213 09:08:33.753735    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:33.757660    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:33.785391    4604 logs.go:282] 0 containers: []
	W1213 09:08:33.785391    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:33.789471    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:33.817747    4604 logs.go:282] 0 containers: []
	W1213 09:08:33.817747    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:33.821119    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:33.849606    4604 logs.go:282] 0 containers: []
	W1213 09:08:33.849635    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:33.852624    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:33.883011    4604 logs.go:282] 0 containers: []
	W1213 09:08:33.883011    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:33.886617    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:33.914695    4604 logs.go:282] 0 containers: []
	W1213 09:08:33.914695    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:33.914695    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:33.914695    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:33.977929    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:33.977929    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:34.008197    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:34.008197    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:34.087742    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:34.077994   25669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:34.079234   25669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:34.080710   25669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:34.081989   25669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:34.083395   25669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:08:34.077994   25669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:34.079234   25669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:34.080710   25669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:34.081989   25669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:34.083395   25669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:08:34.087742    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:34.087742    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:34.130894    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:34.130894    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:36.687878    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:36.710647    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:36.741923    4604 logs.go:282] 0 containers: []
	W1213 09:08:36.741956    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:36.745908    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:36.773011    4604 logs.go:282] 0 containers: []
	W1213 09:08:36.773011    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:36.777059    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:36.806949    4604 logs.go:282] 0 containers: []
	W1213 09:08:36.806949    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:36.811294    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:36.839274    4604 logs.go:282] 0 containers: []
	W1213 09:08:36.839274    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:36.843833    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:36.871615    4604 logs.go:282] 0 containers: []
	W1213 09:08:36.871615    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:36.875410    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:36.904496    4604 logs.go:282] 0 containers: []
	W1213 09:08:36.904496    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:36.908270    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:36.937747    4604 logs.go:282] 0 containers: []
	W1213 09:08:36.937747    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:36.937747    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:36.937747    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:37.017981    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:37.005392   25810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:37.010112   25810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:37.011449   25810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:37.012674   25810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:37.013720   25810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:08:37.005392   25810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:37.010112   25810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:37.011449   25810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:37.012674   25810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:37.013720   25810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:08:37.017981    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:37.018025    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:37.058111    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:37.058111    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:37.112070    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:37.112070    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:37.178407    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:37.178407    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:39.714817    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:39.735622    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:39.767408    4604 logs.go:282] 0 containers: []
	W1213 09:08:39.767408    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:39.771362    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:39.800883    4604 logs.go:282] 0 containers: []
	W1213 09:08:39.800883    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:39.805233    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:39.833400    4604 logs.go:282] 0 containers: []
	W1213 09:08:39.833400    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:39.837009    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:39.864328    4604 logs.go:282] 0 containers: []
	W1213 09:08:39.864373    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:39.868165    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:39.895992    4604 logs.go:282] 0 containers: []
	W1213 09:08:39.895992    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:39.899539    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:39.926222    4604 logs.go:282] 0 containers: []
	W1213 09:08:39.926294    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:39.929312    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:39.957665    4604 logs.go:282] 0 containers: []
	W1213 09:08:39.957738    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:39.957738    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:39.957738    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:39.986966    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:39.986966    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:40.066305    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:40.055341   25967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:40.056045   25967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:40.058442   25967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:40.059663   25967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:40.060820   25967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:40.066357    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:40.066357    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:40.109785    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:40.109785    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:40.157108    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:40.157134    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
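The cycle above shows how minikube enumerates control-plane containers: one docker ps -a per component, filtered on the kubeadm container-naming convention k8s_<component> and formatted to print only IDs. A minimal Go sketch of that lookup, hypothetical rather than minikube's actual logs.go code, assuming a docker CLI on PATH (on the node it runs over SSH with sudo):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of containers named k8s_<component>;
// an empty result corresponds to the "0 containers" / "No container
// was found" pairs in the log above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("lookup for %q failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	}
}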
	I1213 09:08:42.726706    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:42.752650    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:42.783377    4604 logs.go:282] 0 containers: []
	W1213 09:08:42.783401    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:42.786899    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:42.817139    4604 logs.go:282] 0 containers: []
	W1213 09:08:42.817212    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:42.820862    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:42.847197    4604 logs.go:282] 0 containers: []
	W1213 09:08:42.847268    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:42.850420    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:42.880094    4604 logs.go:282] 0 containers: []
	W1213 09:08:42.880094    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:42.884146    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:42.913168    4604 logs.go:282] 0 containers: []
	W1213 09:08:42.913168    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:42.916601    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:42.945059    4604 logs.go:282] 0 containers: []
	W1213 09:08:42.945059    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:42.950263    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:42.978582    4604 logs.go:282] 0 containers: []
	W1213 09:08:42.978603    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:42.978603    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:42.978603    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:43.041879    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:43.041879    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:43.072317    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:43.072317    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:43.165917    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:43.155759   26118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:43.156841   26118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:43.158782   26118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:43.160038   26118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:43.160953   26118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:43.165917    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:43.165917    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:43.207209    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:43.207209    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
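Every failed "describe nodes" block above carries the same root symptom: kubectl cannot reach the API server because nothing is listening on port 8441 inside the node. A standalone reachability probe for that condition, as a sketch (the address and timeout are assumptions; it would need to run wherever localhost:8441 resolves to the apiserver):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// A refused dial here matches the "dial tcp [::1]:8441: connect:
	// connection refused" errors in the stderr blocks above.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}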
	I1213 09:08:45.761070    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:45.783759    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:45.815346    4604 logs.go:282] 0 containers: []
	W1213 09:08:45.815346    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:45.819219    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:45.846414    4604 logs.go:282] 0 containers: []
	W1213 09:08:45.846414    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:45.849850    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:45.881303    4604 logs.go:282] 0 containers: []
	W1213 09:08:45.881303    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:45.885203    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:45.911758    4604 logs.go:282] 0 containers: []
	W1213 09:08:45.911758    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:45.915687    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:45.946589    4604 logs.go:282] 0 containers: []
	W1213 09:08:45.946589    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:45.950051    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:45.976088    4604 logs.go:282] 0 containers: []
	W1213 09:08:45.976088    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:45.979669    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:46.011063    4604 logs.go:282] 0 containers: []
	W1213 09:08:46.011155    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:46.011155    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:46.011155    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:46.074019    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:46.075019    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:46.106619    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:46.106619    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:46.188897    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:46.178478   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:46.179482   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:46.180684   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:46.181950   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:46.183541   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:46.188897    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:46.188897    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:46.229995    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:46.229995    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
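The timestamps (09:08:39, :42, :45, :48, ...) show the wait loop's cadence: a pgrep for a kube-apiserver process roughly every three seconds, with a full diagnostic pass on each miss. A minimal sketch of such a loop; the pgrep pattern is taken from the log, while the six-minute deadline is an assumption, not something the log states:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning reports whether a kube-apiserver process matching the
// minikube pattern exists; pgrep exits non-zero when nothing matches.
func apiserverRunning() bool {
	return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		fmt.Println("kube-apiserver not found; gathering diagnostics and retrying")
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}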
	I1213 09:08:48.789468    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:48.811354    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:48.842470    4604 logs.go:282] 0 containers: []
	W1213 09:08:48.842470    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:48.848670    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:48.876329    4604 logs.go:282] 0 containers: []
	W1213 09:08:48.876329    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:48.879989    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:48.908565    4604 logs.go:282] 0 containers: []
	W1213 09:08:48.908565    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:48.912255    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:48.948072    4604 logs.go:282] 0 containers: []
	W1213 09:08:48.948072    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:48.951857    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:48.980030    4604 logs.go:282] 0 containers: []
	W1213 09:08:48.980030    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:48.983447    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:49.016239    4604 logs.go:282] 0 containers: []
	W1213 09:08:49.016239    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:49.022258    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:49.049950    4604 logs.go:282] 0 containers: []
	W1213 09:08:49.049950    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:49.049950    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:49.049950    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:49.094252    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:49.094252    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:49.146427    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:49.146952    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:49.205850    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:49.205850    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:49.235850    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:49.235850    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:49.315580    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:49.305530   26435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:49.308706   26435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:49.309996   26435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:49.311283   26435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:49.312405   26435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:51.820920    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:51.843200    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:51.874270    4604 logs.go:282] 0 containers: []
	W1213 09:08:51.874322    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:51.877687    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:51.905886    4604 logs.go:282] 0 containers: []
	W1213 09:08:51.905886    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:51.910483    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:51.937921    4604 logs.go:282] 0 containers: []
	W1213 09:08:51.938207    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:51.942126    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:51.970152    4604 logs.go:282] 0 containers: []
	W1213 09:08:51.970152    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:51.973777    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:52.005341    4604 logs.go:282] 0 containers: []
	W1213 09:08:52.005341    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:52.011533    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:52.042004    4604 logs.go:282] 0 containers: []
	W1213 09:08:52.042004    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:52.045665    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:52.073964    4604 logs.go:282] 0 containers: []
	W1213 09:08:52.073964    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:52.073964    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:52.073964    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:52.136324    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:52.137327    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:52.167493    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:52.167493    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:52.247700    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:52.239213   26566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:52.240590   26566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:52.241695   26566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:52.242537   26566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:52.243658   26566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:52.247700    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:52.247700    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:52.289002    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:52.289002    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
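Each pass gathers the same four sources, only their order varies: kubelet and Docker via journalctl, kernel warnings via dmesg, and container status via crictl with a docker fallback. A hypothetical wrapper that replays those commands; the command strings are copied verbatim from the log, everything else is scaffolding:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := []struct{ label, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
		// The backquoted fallback keeps the command non-empty: if crictl
		// is absent or fails, plain docker ps -a is tried instead.
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range sources {
		fmt.Println("Gathering logs for", s.label, "...")
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			fmt.Println("  failed:", err)
		}
		fmt.Print(string(out))
	}
}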
	I1213 09:08:54.844809    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:54.866930    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:54.898229    4604 logs.go:282] 0 containers: []
	W1213 09:08:54.898229    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:54.902031    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:54.932712    4604 logs.go:282] 0 containers: []
	W1213 09:08:54.932712    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:54.936121    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:54.963632    4604 logs.go:282] 0 containers: []
	W1213 09:08:54.963632    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:54.967503    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:54.993576    4604 logs.go:282] 0 containers: []
	W1213 09:08:54.993576    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:54.997842    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:55.025663    4604 logs.go:282] 0 containers: []
	W1213 09:08:55.025663    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:55.029428    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:55.057141    4604 logs.go:282] 0 containers: []
	W1213 09:08:55.057141    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:55.061017    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:55.089820    4604 logs.go:282] 0 containers: []
	W1213 09:08:55.089820    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:55.089820    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:55.089820    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:55.153977    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:55.154001    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:55.215966    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:55.215966    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:55.244751    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:55.244751    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:55.322925    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:55.313352   26733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:55.314042   26733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:55.317002   26733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:55.318221   26733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:55.318785   26733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:55.322925    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:55.322925    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:57.870018    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:57.892445    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:57.923189    4604 logs.go:282] 0 containers: []
	W1213 09:08:57.923189    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:57.926680    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:57.956979    4604 logs.go:282] 0 containers: []
	W1213 09:08:57.956979    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:57.960468    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:57.989714    4604 logs.go:282] 0 containers: []
	W1213 09:08:57.989714    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:57.994672    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:58.021349    4604 logs.go:282] 0 containers: []
	W1213 09:08:58.021349    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:58.024912    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:58.053594    4604 logs.go:282] 0 containers: []
	W1213 09:08:58.053594    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:58.057186    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:58.086247    4604 logs.go:282] 0 containers: []
	W1213 09:08:58.086247    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:58.089444    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:58.117375    4604 logs.go:282] 0 containers: []
	W1213 09:08:58.117375    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:58.117375    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:58.117375    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:58.159414    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:58.159414    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:58.213441    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:58.213441    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:58.275646    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:58.275646    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:58.307733    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:58.307733    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:58.393941    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:58.383096   26883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:58.384651   26883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:58.385333   26883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:58.388769   26883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:58.389485   26883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:00.900693    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:00.925586    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:00.954130    4604 logs.go:282] 0 containers: []
	W1213 09:09:00.954130    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:00.957383    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:00.984796    4604 logs.go:282] 0 containers: []
	W1213 09:09:00.984826    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:00.988339    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:01.013943    4604 logs.go:282] 0 containers: []
	W1213 09:09:01.013943    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:01.017466    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:01.045614    4604 logs.go:282] 0 containers: []
	W1213 09:09:01.045614    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:01.049219    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:01.077719    4604 logs.go:282] 0 containers: []
	W1213 09:09:01.077719    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:01.083105    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:01.114373    4604 logs.go:282] 0 containers: []
	W1213 09:09:01.114373    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:01.118034    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:01.145171    4604 logs.go:282] 0 containers: []
	W1213 09:09:01.145171    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:01.145171    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:01.145171    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:01.227391    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:01.216889   27008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:01.217760   27008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:01.220024   27008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:01.220903   27008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:01.223053   27008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:01.227391    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:01.227391    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:01.266324    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:01.266324    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:01.318698    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:01.318698    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:01.379640    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:01.379640    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:03.917253    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:03.941711    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:03.969911    4604 logs.go:282] 0 containers: []
	W1213 09:09:03.969911    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:03.973403    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:04.002458    4604 logs.go:282] 0 containers: []
	W1213 09:09:04.002458    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:04.006090    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:04.034145    4604 logs.go:282] 0 containers: []
	W1213 09:09:04.034145    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:04.037736    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:04.063991    4604 logs.go:282] 0 containers: []
	W1213 09:09:04.063991    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:04.066963    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:04.096807    4604 logs.go:282] 0 containers: []
	W1213 09:09:04.096807    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:04.100249    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:04.128437    4604 logs.go:282] 0 containers: []
	W1213 09:09:04.128437    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:04.132074    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:04.160225    4604 logs.go:282] 0 containers: []
	W1213 09:09:04.160225    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:04.160225    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:04.160225    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:04.222581    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:04.222581    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:04.251920    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:04.251920    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:04.333622    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:04.320010   27162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:04.321197   27162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:04.326586   27162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:04.327493   27162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:04.329574   27162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:04.333622    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:04.333622    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:04.373214    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:04.373214    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:06.935527    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:06.958474    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:06.990564    4604 logs.go:282] 0 containers: []
	W1213 09:09:06.990564    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:06.994406    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:07.025506    4604 logs.go:282] 0 containers: []
	W1213 09:09:07.025506    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:07.029905    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:07.060066    4604 logs.go:282] 0 containers: []
	W1213 09:09:07.060066    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:07.063610    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:07.091922    4604 logs.go:282] 0 containers: []
	W1213 09:09:07.092007    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:07.095595    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:07.124460    4604 logs.go:282] 0 containers: []
	W1213 09:09:07.124496    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:07.128147    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:07.157131    4604 logs.go:282] 0 containers: []
	W1213 09:09:07.157131    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:07.160743    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:07.191500    4604 logs.go:282] 0 containers: []
	W1213 09:09:07.191500    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:07.191500    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:07.191500    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:07.242194    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:07.242273    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:07.302067    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:07.302067    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:07.333088    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:07.333088    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:07.415000    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:07.401947   27328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:07.407518   27328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:07.408692   27328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:07.409598   27328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:07.411816   27328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:07.415000    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:07.415000    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:09.963522    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:09.986505    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:10.023010    4604 logs.go:282] 0 containers: []
	W1213 09:09:10.023010    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:10.026202    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:10.057866    4604 logs.go:282] 0 containers: []
	W1213 09:09:10.057945    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:10.061802    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:10.089523    4604 logs.go:282] 0 containers: []
	W1213 09:09:10.089523    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:10.092989    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:10.124941    4604 logs.go:282] 0 containers: []
	W1213 09:09:10.124941    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:10.128882    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:10.157336    4604 logs.go:282] 0 containers: []
	W1213 09:09:10.157336    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:10.160838    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:10.186957    4604 logs.go:282] 0 containers: []
	W1213 09:09:10.186957    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:10.190881    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:10.219557    4604 logs.go:282] 0 containers: []
	W1213 09:09:10.219557    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:10.219557    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:10.219557    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:10.298159    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:10.289746   27456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:10.290828   27456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:10.291834   27456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:10.292960   27456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:10.294167   27456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
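Every describe-nodes attempt in this run fails identically: the TCP connect to localhost:8441 is refused, meaning nothing is listening on the apiserver port at all (consistent with the empty kube-apiserver probes above). A quick way to confirm from inside the node, sketched under the assumption that ss and curl are available in the node image:

    # Illustrative: check for a listener on the apiserver port (8441 in this run).
    sudo ss -tlnp | grep 8441 || echo "no listener on 8441"
    # A raw health probe fails the same way kubectl does while nothing is listening:
    curl -ksS https://localhost:8441/healthz || true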
	I1213 09:09:10.298159    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:10.298159    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
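The Docker gather above pulls the most recent journal entries for two units in one call; journalctl accepts repeated -u flags and -n for the line count. The same command with long options, for readability (illustrative):

    sudo journalctl --unit docker --unit cri-docker --lines 400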
	I1213 09:09:10.338779    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:10.338779    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
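The container-status gather relies on a backtick fallback: when crictl is on PATH, `which crictl` expands to its path and the CRI listing runs; when it is not, the substitution yields the literal word crictl, that invocation fails, and || falls through to plain docker ps -a. Unrolled for readability (a sketch of the same logic):

    # Illustrative: the crictl-or-docker fallback, written out long-hand.
    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a    # CRI-level view of all containers
    else
      sudo docker ps -a    # fall back to the Docker CLI
    fi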
	I1213 09:09:10.385337    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:10.385337    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:10.445911    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:10.445911    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
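The dmesg gather restricts the kernel ring buffer to warning-and-worse messages. Assuming util-linux dmesg (where -P is --nopager, -H is --human, and -L controls color), the long-option equivalent is:

    # Illustrative long-option form of the dmesg gather above.
    sudo dmesg --nopager --human --color=never \
         --level warn,err,crit,alert,emerg | tail -n 400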
	I1213 09:09:12.983669    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
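Between gather cycles, minikube re-checks for a running apiserver process. In pgrep -xnf, -f matches against the full command line, -x requires the pattern to match that line exactly, and -n prints only the newest match; an empty result here means no kube-apiserver process exists yet. Annotated (illustrative):

    #   -f  match against the full command line, not just the process name
    #   -x  the pattern must match the whole command line
    #   -n  print only the newest matching PID
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'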
	I1213 09:09:13.005971    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:13.038383    4604 logs.go:282] 0 containers: []
	W1213 09:09:13.038383    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:13.041755    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:13.071860    4604 logs.go:282] 0 containers: []
	W1213 09:09:13.071860    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:13.075101    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:13.104117    4604 logs.go:282] 0 containers: []
	W1213 09:09:13.104198    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:13.107582    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:13.137511    4604 logs.go:282] 0 containers: []
	W1213 09:09:13.137511    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:13.142951    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:13.170239    4604 logs.go:282] 0 containers: []
	W1213 09:09:13.170239    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:13.174246    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:13.204251    4604 logs.go:282] 0 containers: []
	W1213 09:09:13.204251    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:13.207747    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:13.235835    4604 logs.go:282] 0 containers: []
	W1213 09:09:13.235835    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:13.235835    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:13.235835    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:13.299873    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:13.300878    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:13.331103    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:13.331103    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:13.409680    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:13.398624   27610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:13.400704   27610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:13.401513   27610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:13.404845   27610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:13.405788   27610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:13.409714    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:13.409714    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:13.454882    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:13.454882    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:16.009703    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:16.033721    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:16.065430    4604 logs.go:282] 0 containers: []
	W1213 09:09:16.065430    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:16.069567    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:16.096385    4604 logs.go:282] 0 containers: []
	W1213 09:09:16.096459    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:16.099989    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:16.127782    4604 logs.go:282] 0 containers: []
	W1213 09:09:16.127782    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:16.130994    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:16.161401    4604 logs.go:282] 0 containers: []
	W1213 09:09:16.161401    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:16.165139    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:16.193589    4604 logs.go:282] 0 containers: []
	W1213 09:09:16.193589    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:16.197319    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:16.226572    4604 logs.go:282] 0 containers: []
	W1213 09:09:16.226607    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:16.230538    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:16.257820    4604 logs.go:282] 0 containers: []
	W1213 09:09:16.257820    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:16.257820    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:16.257820    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:16.308467    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:16.308467    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:16.371370    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:16.371370    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:16.400835    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:16.400835    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:16.485671    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:16.475989   27776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:16.477022   27776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:16.477650   27776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:16.480077   27776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:16.481064   27776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:16.485701    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:16.485701    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:19.036505    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:19.061114    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:19.095852    4604 logs.go:282] 0 containers: []
	W1213 09:09:19.095852    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:19.099353    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:19.131781    4604 logs.go:282] 0 containers: []
	W1213 09:09:19.131781    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:19.134812    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:19.165823    4604 logs.go:282] 0 containers: []
	W1213 09:09:19.165823    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:19.169019    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:19.198392    4604 logs.go:282] 0 containers: []
	W1213 09:09:19.198392    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:19.203290    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:19.233051    4604 logs.go:282] 0 containers: []
	W1213 09:09:19.233051    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:19.237259    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:19.263869    4604 logs.go:282] 0 containers: []
	W1213 09:09:19.263869    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:19.268019    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:19.296220    4604 logs.go:282] 0 containers: []
	W1213 09:09:19.296220    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:19.296220    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:19.296220    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:19.359981    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:19.359981    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:19.391692    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:19.391692    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:19.476176    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:19.465489   27912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:19.466623   27912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:19.468158   27912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:19.469971   27912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:19.470922   27912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:19.476176    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:19.476176    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:19.518567    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:19.518567    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:22.072334    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:22.095545    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:22.126659    4604 logs.go:282] 0 containers: []
	W1213 09:09:22.126690    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:22.130501    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:22.160329    4604 logs.go:282] 0 containers: []
	W1213 09:09:22.160363    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:22.164108    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:22.193702    4604 logs.go:282] 0 containers: []
	W1213 09:09:22.193732    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:22.196904    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:22.225415    4604 logs.go:282] 0 containers: []
	W1213 09:09:22.225415    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:22.228719    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:22.258896    4604 logs.go:282] 0 containers: []
	W1213 09:09:22.258896    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:22.262806    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:22.289609    4604 logs.go:282] 0 containers: []
	W1213 09:09:22.289609    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:22.293253    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:22.323681    4604 logs.go:282] 0 containers: []
	W1213 09:09:22.323681    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:22.323681    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:22.323681    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:22.386923    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:22.386923    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:22.416353    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:22.416353    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:22.498735    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:22.491314   28058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:22.492386   28058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:22.493575   28058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:22.494571   28058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:22.495595   28058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:22.498735    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:22.498735    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:22.550754    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:22.550754    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:25.111955    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:25.134114    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:25.160988    4604 logs.go:282] 0 containers: []
	W1213 09:09:25.160988    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:25.164339    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:25.195249    4604 logs.go:282] 0 containers: []
	W1213 09:09:25.195249    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:25.198638    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:25.225490    4604 logs.go:282] 0 containers: []
	W1213 09:09:25.225490    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:25.231098    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:25.257691    4604 logs.go:282] 0 containers: []
	W1213 09:09:25.257691    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:25.261515    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:25.287683    4604 logs.go:282] 0 containers: []
	W1213 09:09:25.287683    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:25.293213    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:25.319319    4604 logs.go:282] 0 containers: []
	W1213 09:09:25.319319    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:25.322958    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:25.354108    4604 logs.go:282] 0 containers: []
	W1213 09:09:25.354108    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:25.354198    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:25.354198    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:25.397011    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:25.397011    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:25.455292    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:25.455292    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:25.517423    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:25.517423    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:25.546322    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:25.546322    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:25.627826    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:25.618808   28240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:25.619718   28240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:25.621380   28240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:25.623002   28240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:25.624181   28240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:28.133991    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:28.156525    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:28.184733    4604 logs.go:282] 0 containers: []
	W1213 09:09:28.184733    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:28.188704    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:28.216710    4604 logs.go:282] 0 containers: []
	W1213 09:09:28.216710    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:28.220744    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:28.249082    4604 logs.go:282] 0 containers: []
	W1213 09:09:28.249082    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:28.252646    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:28.284289    4604 logs.go:282] 0 containers: []
	W1213 09:09:28.284289    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:28.288332    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:28.314796    4604 logs.go:282] 0 containers: []
	W1213 09:09:28.314796    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:28.321406    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:28.350295    4604 logs.go:282] 0 containers: []
	W1213 09:09:28.350295    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:28.353850    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:28.382048    4604 logs.go:282] 0 containers: []
	W1213 09:09:28.382048    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:28.382048    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:28.382048    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:28.444457    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:28.444457    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:28.475310    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:28.475337    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:28.562628    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:28.551828   28371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:28.553431   28371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:28.555792   28371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:28.558400   28371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:28.559403   28371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:28.562628    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:28.562628    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:28.605307    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:28.605307    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:31.165266    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:31.186966    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:31.222005    4604 logs.go:282] 0 containers: []
	W1213 09:09:31.222066    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:31.225186    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:31.256308    4604 logs.go:282] 0 containers: []
	W1213 09:09:31.256308    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:31.260088    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:31.287293    4604 logs.go:282] 0 containers: []
	W1213 09:09:31.287293    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:31.290982    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:31.319241    4604 logs.go:282] 0 containers: []
	W1213 09:09:31.319241    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:31.322581    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:31.350058    4604 logs.go:282] 0 containers: []
	W1213 09:09:31.350128    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:31.353584    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:31.380173    4604 logs.go:282] 0 containers: []
	W1213 09:09:31.380212    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:31.384070    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:31.411239    4604 logs.go:282] 0 containers: []
	W1213 09:09:31.411239    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:31.411239    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:31.411239    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:31.477283    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:31.477283    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:31.507500    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:31.508020    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:31.597314    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:31.584543   28527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:31.585344   28527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:31.588383   28527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:31.589783   28527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:31.590653   28527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:31.597314    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:31.597314    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:31.635938    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:31.635938    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:34.189996    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:34.212398    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:34.238809    4604 logs.go:282] 0 containers: []
	W1213 09:09:34.238809    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:34.242256    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:34.270112    4604 logs.go:282] 0 containers: []
	W1213 09:09:34.270112    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:34.273875    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:34.303456    4604 logs.go:282] 0 containers: []
	W1213 09:09:34.303456    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:34.307522    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:34.338016    4604 logs.go:282] 0 containers: []
	W1213 09:09:34.338016    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:34.341872    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:34.368952    4604 logs.go:282] 0 containers: []
	W1213 09:09:34.368952    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:34.374198    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:34.405261    4604 logs.go:282] 0 containers: []
	W1213 09:09:34.405261    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:34.408381    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:34.435072    4604 logs.go:282] 0 containers: []
	W1213 09:09:34.435072    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:34.435072    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:34.435072    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:34.515381    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:34.502247   28663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:34.503068   28663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:34.508040   28663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:34.508918   28663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:34.510099   28663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:34.515381    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:34.515381    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:34.573241    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:34.573241    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:34.623650    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:34.624178    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:34.682935    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:34.682935    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:37.219569    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:37.242545    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:37.272082    4604 logs.go:282] 0 containers: []
	W1213 09:09:37.272082    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:37.275835    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:37.304181    4604 logs.go:282] 0 containers: []
	W1213 09:09:37.304181    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:37.307884    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:37.335943    4604 logs.go:282] 0 containers: []
	W1213 09:09:37.335943    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:37.339864    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:37.377566    4604 logs.go:282] 0 containers: []
	W1213 09:09:37.377566    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:37.382018    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:37.412404    4604 logs.go:282] 0 containers: []
	W1213 09:09:37.412404    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:37.416038    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:37.442722    4604 logs.go:282] 0 containers: []
	W1213 09:09:37.442722    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:37.446771    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:37.474398    4604 logs.go:282] 0 containers: []
	W1213 09:09:37.474398    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:37.474398    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:37.474398    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:37.577898    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:37.567137   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:37.567518   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:37.570136   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:37.571337   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:37.572686   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:37.577898    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:37.577898    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:37.620560    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:37.620560    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:37.669632    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:37.669632    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:37.734142    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:37.734142    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:40.271884    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:40.294824    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:40.321888    4604 logs.go:282] 0 containers: []
	W1213 09:09:40.321888    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:40.325505    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:40.353723    4604 logs.go:282] 0 containers: []
	W1213 09:09:40.353808    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:40.357193    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:40.386522    4604 logs.go:282] 0 containers: []
	W1213 09:09:40.386522    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:40.391186    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:40.418547    4604 logs.go:282] 0 containers: []
	W1213 09:09:40.418547    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:40.425278    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:40.455783    4604 logs.go:282] 0 containers: []
	W1213 09:09:40.455783    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:40.459890    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:40.489966    4604 logs.go:282] 0 containers: []
	W1213 09:09:40.489966    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:40.493703    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:40.538181    4604 logs.go:282] 0 containers: []
	W1213 09:09:40.538181    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:40.538253    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:40.538253    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:40.601826    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:40.601826    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:40.631898    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:40.631898    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:40.713071    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:40.701224   28980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:40.701842   28980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:40.706275   28980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:40.707428   28980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:40.708512   28980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:09:40.701224   28980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:40.701842   28980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:40.706275   28980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:40.707428   28980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:40.708512   28980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
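The repeated "connection refused" stderr above is kubectl failing to open a TCP connection to the apiserver endpoint, consistent with the zero k8s_kube-apiserver containers found just before. A minimal reachability check for that symptom (port 8441 taken from the log):

// Minimal reachability check: if nothing is listening on the apiserver
// port, Dial fails with "connection refused", matching the kubectl errors.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}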
	I1213 09:09:40.713071    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:40.713071    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:40.755270    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:40.755270    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	[log-gathering cycles at 09:09:43, 09:09:46, 09:09:49, 09:09:52, 09:09:55, 09:09:58, 09:10:01, 09:10:04, and 09:10:07 omitted as near-verbatim repeats of the cycle above: each probe found 0 containers for every control-plane component, and each "describe nodes" attempt failed with the same "connection refused" on localhost:8441]
	I1213 09:10:10.570544    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:10.592396    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:10.624974    4604 logs.go:282] 0 containers: []
	W1213 09:10:10.624974    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:10.629502    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:10.657201    4604 logs.go:282] 0 containers: []
	W1213 09:10:10.657201    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:10.660591    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:10.687563    4604 logs.go:282] 0 containers: []
	W1213 09:10:10.687563    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:10.691289    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:10.721420    4604 logs.go:282] 0 containers: []
	W1213 09:10:10.721420    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:10.724919    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:10.752211    4604 logs.go:282] 0 containers: []
	W1213 09:10:10.752211    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:10.755905    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:10.784215    4604 logs.go:282] 0 containers: []
	W1213 09:10:10.784215    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:10.788207    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:10.816951    4604 logs.go:282] 0 containers: []
	W1213 09:10:10.816951    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:10.816951    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:10.816951    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:10.879172    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:10.879172    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:10.908202    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:10.908202    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:10.986325    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:10.976268   30491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:10.977455   30491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:10.978475   30491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:10.979601   30491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:10.980602   30491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:10.976268   30491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:10.977455   30491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:10.978475   30491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:10.979601   30491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:10.980602   30491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
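	The `0 containers: []` lines above come from one `docker ps -a` per control-plane component, filtered on the `k8s_` name prefix that cri-dockerd gives kubeadm-managed containers. The same sweep, compressed into a loop (a sketch to run inside the node, not minikube's own code):

	    # Sketch: one docker ps per expected component, matching the log's filters.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet; do
	      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
	      echo "${c}: ${ids:-none}"
	    done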
	I1213 09:10:10.986325    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:10.986325    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:11.027515    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:11.027515    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:13.588427    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:13.611368    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:13.644873    4604 logs.go:282] 0 containers: []
	W1213 09:10:13.644873    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:13.648808    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:13.677881    4604 logs.go:282] 0 containers: []
	W1213 09:10:13.677942    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:13.682617    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:13.712870    4604 logs.go:282] 0 containers: []
	W1213 09:10:13.712870    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:13.716696    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:13.744007    4604 logs.go:282] 0 containers: []
	W1213 09:10:13.744007    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:13.748548    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:13.777967    4604 logs.go:282] 0 containers: []
	W1213 09:10:13.778011    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:13.781321    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:13.809271    4604 logs.go:282] 0 containers: []
	W1213 09:10:13.809271    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:13.813285    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:13.840555    4604 logs.go:282] 0 containers: []
	W1213 09:10:13.840555    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:13.840555    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:13.840555    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:13.904251    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:13.904251    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:13.935133    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:13.935133    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:14.016449    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:14.005177   30636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:14.005946   30636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:14.009264   30636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:14.010040   30636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:14.012104   30636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:14.005177   30636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:14.005946   30636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:14.009264   30636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:14.010040   30636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:14.012104   30636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
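	The "container status" gatherer relies on a shell fallback chain: use crictl if `which` can find it, otherwise let sudo try a bare `crictl`, and only then fall back to the Docker CLI. The same command with the quoting made explicit (behavior-equivalent sketch):

	    # Prefer crictl when present; otherwise fall back to docker ps -a.
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a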
	I1213 09:10:14.016449    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:14.016449    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:14.057706    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:14.057706    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:16.615756    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:16.638088    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:16.670041    4604 logs.go:282] 0 containers: []
	W1213 09:10:16.670041    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:16.673924    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:16.704163    4604 logs.go:282] 0 containers: []
	W1213 09:10:16.704163    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:16.710097    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:16.740700    4604 logs.go:282] 0 containers: []
	W1213 09:10:16.740700    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:16.744219    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:16.771219    4604 logs.go:282] 0 containers: []
	W1213 09:10:16.771219    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:16.774904    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:16.804658    4604 logs.go:282] 0 containers: []
	W1213 09:10:16.804658    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:16.808110    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:16.837026    4604 logs.go:282] 0 containers: []
	W1213 09:10:16.837026    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:16.840957    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:16.869149    4604 logs.go:282] 0 containers: []
	W1213 09:10:16.869149    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:16.869149    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:16.869149    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:16.933545    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:16.933545    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:16.964296    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:16.964296    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:17.040603    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:17.030769   30785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:17.031886   30785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:17.032780   30785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:17.035115   30785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:17.036189   30785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:17.030769   30785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:17.031886   30785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:17.032780   30785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:17.035115   30785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:17.036189   30785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
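	The kubelet and Docker gathers are plain journalctl tail reads. Run interactively they would invoke a pager, so a sketch for manual use adds --no-pager (the harness avoids the pager by capturing the output instead):

	    # Last 400 lines of each relevant unit's journal.
	    sudo journalctl -u kubelet -n 400 --no-pager
	    sudo journalctl -u docker -u cri-docker -n 400 --no-pager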
	I1213 09:10:17.040603    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:17.040603    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:17.083647    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:17.083647    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:19.650764    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:19.674143    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:19.702643    4604 logs.go:282] 0 containers: []
	W1213 09:10:19.702643    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:19.707045    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:19.734166    4604 logs.go:282] 0 containers: []
	W1213 09:10:19.734166    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:19.738121    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:19.767856    4604 logs.go:282] 0 containers: []
	W1213 09:10:19.767856    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:19.771207    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:19.801742    4604 logs.go:282] 0 containers: []
	W1213 09:10:19.801819    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:19.805222    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:19.833321    4604 logs.go:282] 0 containers: []
	W1213 09:10:19.833321    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:19.836856    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:19.863434    4604 logs.go:282] 0 containers: []
	W1213 09:10:19.863465    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:19.867234    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:19.897054    4604 logs.go:282] 0 containers: []
	W1213 09:10:19.897054    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:19.897054    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:19.897054    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:19.946805    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:19.946805    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:20.007213    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:20.007213    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:20.036248    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:20.036248    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:20.114272    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:20.104527   30950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:20.106024   30950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:20.107052   30950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:20.108958   30950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:20.109919   30950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:20.104527   30950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:20.106024   30950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:20.107052   30950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:20.108958   30950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:20.109919   30950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
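	The dmesg step narrows the kernel ring buffer to warnings and worse. The flags, per util-linux dmesg:

	    # -H          human-readable output (would normally page)
	    # -P          suppress the pager that -H implies
	    # -L=never    no color escape codes in the captured text
	    # --level ... keep only the listed syslog severities
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400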
	I1213 09:10:20.114272    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:20.114272    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:22.659210    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:22.681874    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:22.711856    4604 logs.go:282] 0 containers: []
	W1213 09:10:22.711856    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:22.715662    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:22.744003    4604 logs.go:282] 0 containers: []
	W1213 09:10:22.744003    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:22.748080    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:22.778409    4604 logs.go:282] 0 containers: []
	W1213 09:10:22.778409    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:22.781997    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:22.809533    4604 logs.go:282] 0 containers: []
	W1213 09:10:22.809557    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:22.812700    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:22.842593    4604 logs.go:282] 0 containers: []
	W1213 09:10:22.842593    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:22.846788    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:22.874683    4604 logs.go:282] 0 containers: []
	W1213 09:10:22.874683    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:22.878045    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:22.906027    4604 logs.go:282] 0 containers: []
	W1213 09:10:22.906027    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:22.906088    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:22.906107    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:22.970513    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:22.970513    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:23.000755    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:23.000755    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:23.084733    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:23.075283   31086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:23.076072   31086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:23.077826   31086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:23.078971   31086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:23.080011   31086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:23.075283   31086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:23.076072   31086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:23.077826   31086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:23.078971   31086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:23.080011   31086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:23.084733    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:23.084733    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:23.127257    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:23.127257    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:25.686782    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:25.709380    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:25.738484    4604 logs.go:282] 0 containers: []
	W1213 09:10:25.738484    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:25.742065    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:25.770152    4604 logs.go:282] 0 containers: []
	W1213 09:10:25.770152    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:25.774113    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:25.803290    4604 logs.go:282] 0 containers: []
	W1213 09:10:25.803290    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:25.807361    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:25.834734    4604 logs.go:282] 0 containers: []
	W1213 09:10:25.834734    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:25.838734    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:25.865666    4604 logs.go:282] 0 containers: []
	W1213 09:10:25.865666    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:25.869046    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:25.896838    4604 logs.go:282] 0 containers: []
	W1213 09:10:25.896838    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:25.900312    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:25.930732    4604 logs.go:282] 0 containers: []
	W1213 09:10:25.930732    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:25.930732    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:25.930732    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:25.980958    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:25.980958    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:26.041855    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:26.041855    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:26.073493    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:26.073493    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:26.159584    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:26.149576   31246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:26.150693   31246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:26.151667   31246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:26.154327   31246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:26.156130   31246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:26.149576   31246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:26.150693   31246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:26.151667   31246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:26.154327   31246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:26.156130   31246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
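	Note that "describe nodes" fails with connection refused rather than an authentication or RBAC error, so the kubeconfig at /var/lib/minikube/kubeconfig resolves fine; the gap is purely the missing listener. A lighter probe against the same endpoint, using the bundled kubectl exactly as the log does (it will fail the same way until the apiserver comes up):

	    # Sketch: hit the readiness endpoint instead of describing nodes.
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz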
	I1213 09:10:26.159584    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:26.159584    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:28.707550    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:28.729858    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:28.759846    4604 logs.go:282] 0 containers: []
	W1213 09:10:28.759846    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:28.763596    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:28.794012    4604 logs.go:282] 0 containers: []
	W1213 09:10:28.794012    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:28.797789    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:28.826515    4604 logs.go:282] 0 containers: []
	W1213 09:10:28.826515    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:28.829640    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:28.861520    4604 logs.go:282] 0 containers: []
	W1213 09:10:28.861520    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:28.864944    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:28.893275    4604 logs.go:282] 0 containers: []
	W1213 09:10:28.893303    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:28.896907    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:28.923381    4604 logs.go:282] 0 containers: []
	W1213 09:10:28.923381    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:28.928293    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:28.960491    4604 logs.go:282] 0 containers: []
	W1213 09:10:28.960491    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:28.960491    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:28.960491    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:29.022787    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:29.022787    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:29.053784    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:29.053784    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:29.136856    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:29.125258   31380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:29.127477   31380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:29.129454   31380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:29.131359   31380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:29.132312   31380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:29.125258   31380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:29.127477   31380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:29.129454   31380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:29.131359   31380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:29.132312   31380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:29.136898    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:29.136898    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:29.179176    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:29.179176    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:31.733518    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:31.756802    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:31.790216    4604 logs.go:282] 0 containers: []
	W1213 09:10:31.790216    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:31.793805    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:31.824397    4604 logs.go:282] 0 containers: []
	W1213 09:10:31.824397    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:31.829526    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:31.857889    4604 logs.go:282] 0 containers: []
	W1213 09:10:31.857889    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:31.861193    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:31.890304    4604 logs.go:282] 0 containers: []
	W1213 09:10:31.890304    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:31.893795    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:31.921856    4604 logs.go:282] 0 containers: []
	W1213 09:10:31.921927    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:31.924962    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:31.953806    4604 logs.go:282] 0 containers: []
	W1213 09:10:31.953837    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:31.957466    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:31.987829    4604 logs.go:282] 0 containers: []
	W1213 09:10:31.987829    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:31.987829    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:31.987829    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:32.034063    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:32.034063    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:32.096079    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:32.096079    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:32.126955    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:32.126955    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:32.209100    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:32.196897   31542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:32.197915   31542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:32.198712   31542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:32.202032   31542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:32.203735   31542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:32.196897   31542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:32.197915   31542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:32.198712   31542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:32.202032   31542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:32.203735   31542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
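	Each cycle opens with a pgrep probe for a running apiserver process. The flags, for reference (pattern quoted here so the shell cannot glob it):

	    # -f  match the pattern against the full command line, not the name
	    # -x  require the pattern to match that command line exactly
	    # -n  print only the newest matching process
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'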
	I1213 09:10:32.209100    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:32.209100    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:34.755896    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:34.779017    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:34.808294    4604 logs.go:282] 0 containers: []
	W1213 09:10:34.808366    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:34.811869    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:34.839872    4604 logs.go:282] 0 containers: []
	W1213 09:10:34.839938    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:34.843685    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:34.871636    4604 logs.go:282] 0 containers: []
	W1213 09:10:34.871636    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:34.875660    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:34.903443    4604 logs.go:282] 0 containers: []
	W1213 09:10:34.903443    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:34.907770    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:34.935581    4604 logs.go:282] 0 containers: []
	W1213 09:10:34.935581    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:34.939767    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:34.969814    4604 logs.go:282] 0 containers: []
	W1213 09:10:34.969814    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:34.973317    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:35.003474    4604 logs.go:282] 0 containers: []
	W1213 09:10:35.003474    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:35.003474    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:35.003537    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:35.066261    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:35.066261    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:35.097692    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:35.097692    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:35.180207    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:35.168999   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:35.170587   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:35.172028   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:35.173692   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:35.175343   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:35.168999   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:35.170587   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:35.172028   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:35.173692   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:35.175343   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:35.180207    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:35.180207    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:35.223159    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:35.223159    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:37.780314    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:37.804001    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:37.835430    4604 logs.go:282] 0 containers: []
	W1213 09:10:37.835430    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:37.839042    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:37.867680    4604 logs.go:282] 0 containers: []
	W1213 09:10:37.867699    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:37.870898    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:37.902798    4604 logs.go:282] 0 containers: []
	W1213 09:10:37.902798    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:37.906542    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:37.934985    4604 logs.go:282] 0 containers: []
	W1213 09:10:37.935050    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:37.938192    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:37.969111    4604 logs.go:282] 0 containers: []
	W1213 09:10:37.969111    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:37.972848    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:38.002751    4604 logs.go:282] 0 containers: []
	W1213 09:10:38.002751    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:38.006552    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:38.035033    4604 logs.go:282] 0 containers: []
	W1213 09:10:38.035033    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:38.035033    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:38.035033    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:38.086087    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:38.086611    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:38.147832    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:38.147832    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:38.180233    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:38.180233    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:38.261008    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:38.249120   31840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:38.250220   31840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:38.251345   31840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:38.252453   31840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:38.253654   31840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:38.249120   31840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:38.250220   31840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:38.251345   31840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:38.252453   31840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:38.253654   31840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:38.261008    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:38.261008    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
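	The harness repeats this whole gather cycle roughly every three seconds with no change in outcome. A minimal wait-for-apiserver loop in the same spirit (a sketch; the 40-iteration budget, about two minutes, is an assumption, not the harness's real timeout):

	    # Poll until a kube-apiserver container exists, then stop waiting.
	    for i in $(seq 1 40); do
	      docker ps --filter name=k8s_kube-apiserver --format '{{.ID}}' \
	        | grep -q . && break
	      sleep 3
	    done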
	I1213 09:10:40.811191    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:40.833394    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:40.865083    4604 logs.go:282] 0 containers: []
	W1213 09:10:40.865083    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:40.868858    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:40.900204    4604 logs.go:282] 0 containers: []
	W1213 09:10:40.900204    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:40.903500    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:40.930103    4604 logs.go:282] 0 containers: []
	W1213 09:10:40.930103    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:40.933495    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:40.960744    4604 logs.go:282] 0 containers: []
	W1213 09:10:40.960744    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:40.964475    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:40.990935    4604 logs.go:282] 0 containers: []
	W1213 09:10:40.990935    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:40.995048    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:41.022706    4604 logs.go:282] 0 containers: []
	W1213 09:10:41.022706    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:41.026451    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:41.056906    4604 logs.go:282] 0 containers: []
	W1213 09:10:41.056906    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:41.056906    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:41.056906    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:41.115470    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:41.115470    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:41.143967    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:41.143967    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:41.232682    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:41.221185   31975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:41.222351   31975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:41.225465   31975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:41.226707   31975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:41.227919   31975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:41.221185   31975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:41.222351   31975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:41.225465   31975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:41.226707   31975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:41.227919   31975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:41.232682    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:41.232682    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:41.274641    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:41.274641    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
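
The cycle above is minikube's diagnostic sweep under the Docker runtime: it looks for each control-plane component via the kubelet's k8s_<component> container-naming convention, treats an empty result as "not running", then collects kubelet, dmesg, describe-nodes, Docker, and container-status logs. All seven lookups return zero containers here, consistent with the connection refusals on 8441. Each lookup reduces to:

    # Per-component container lookup, as run by logs.go above (component name
    # taken from the log; any of the seven k8s_* names can be substituted).
    docker ps -a --filter "name=k8s_kube-apiserver" --format '{{.ID}}'
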
	I1213 09:10:43.828677    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:43.852994    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:43.886713    4604 logs.go:282] 0 containers: []
	W1213 09:10:43.886713    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:43.890625    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:43.919501    4604 logs.go:282] 0 containers: []
	W1213 09:10:43.919501    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:43.923426    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:43.951987    4604 logs.go:282] 0 containers: []
	W1213 09:10:43.951987    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:43.955937    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:43.985130    4604 logs.go:282] 0 containers: []
	W1213 09:10:43.985130    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:43.988484    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:44.018258    4604 logs.go:282] 0 containers: []
	W1213 09:10:44.018258    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:44.022302    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:44.050666    4604 logs.go:282] 0 containers: []
	W1213 09:10:44.050666    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:44.054876    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:44.085108    4604 logs.go:282] 0 containers: []
	W1213 09:10:44.085108    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:44.085108    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:44.085108    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:44.112809    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:44.112809    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:44.193362    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:44.181849   32122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:44.183015   32122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:44.186504   32122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:44.187951   32122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:44.188991   32122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:44.181849   32122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:44.183015   32122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:44.186504   32122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:44.187951   32122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:44.188991   32122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:44.193362    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:44.193362    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:44.237334    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:44.237334    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:44.289034    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:44.289034    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:46.855055    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:46.878443    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:46.909614    4604 logs.go:282] 0 containers: []
	W1213 09:10:46.909614    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:46.916327    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:46.944603    4604 logs.go:282] 0 containers: []
	W1213 09:10:46.944603    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:46.948050    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:46.976487    4604 logs.go:282] 0 containers: []
	W1213 09:10:46.976487    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:46.980498    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:47.008131    4604 logs.go:282] 0 containers: []
	W1213 09:10:47.008131    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:47.011552    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:47.039887    4604 logs.go:282] 0 containers: []
	W1213 09:10:47.039887    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:47.043570    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:47.072161    4604 logs.go:282] 0 containers: []
	W1213 09:10:47.072161    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:47.075765    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:47.105843    4604 logs.go:282] 0 containers: []
	W1213 09:10:47.105843    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:47.105843    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:47.105843    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:47.168444    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:47.168444    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:47.198734    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:47.198734    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:47.280671    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:47.269605   32286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:47.270521   32286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:47.272646   32286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:47.273887   32286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:47.274821   32286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:47.269605   32286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:47.270521   32286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:47.272646   32286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:47.273887   32286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:47.274821   32286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:47.280671    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:47.280671    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:47.322808    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:47.322808    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:49.882724    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:49.904378    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:49.936667    4604 logs.go:282] 0 containers: []
	W1213 09:10:49.936667    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:49.939740    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:49.973628    4604 logs.go:282] 0 containers: []
	W1213 09:10:49.973628    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:49.977831    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:50.008373    4604 logs.go:282] 0 containers: []
	W1213 09:10:50.008452    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:50.013016    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:50.043104    4604 logs.go:282] 0 containers: []
	W1213 09:10:50.043104    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:50.046855    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:50.078353    4604 logs.go:282] 0 containers: []
	W1213 09:10:50.078353    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:50.082270    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:50.113856    4604 logs.go:282] 0 containers: []
	W1213 09:10:50.113856    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:50.118930    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:50.148208    4604 logs.go:282] 0 containers: []
	W1213 09:10:50.148208    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:50.148208    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:50.148208    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:50.214697    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:50.214697    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:50.243820    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:50.243820    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:50.331549    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:50.320817   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:50.321835   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:50.324796   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:50.325911   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:50.326959   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:50.320817   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:50.321835   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:50.324796   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:50.325911   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:50.326959   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:50.331549    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:50.331549    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:50.372171    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:50.372171    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:52.928403    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:52.950923    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:52.979279    4604 logs.go:282] 0 containers: []
	W1213 09:10:52.979307    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:52.982821    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:53.012984    4604 logs.go:282] 0 containers: []
	W1213 09:10:53.013051    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:53.016321    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:53.046839    4604 logs.go:282] 0 containers: []
	W1213 09:10:53.046839    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:53.051164    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:53.080161    4604 logs.go:282] 0 containers: []
	W1213 09:10:53.080161    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:53.083793    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:53.117152    4604 logs.go:282] 0 containers: []
	W1213 09:10:53.117152    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:53.120486    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:53.150543    4604 logs.go:282] 0 containers: []
	W1213 09:10:53.150543    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:53.154171    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:53.184334    4604 logs.go:282] 0 containers: []
	W1213 09:10:53.184334    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:53.184334    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:53.184334    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:53.228630    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:53.228630    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:53.282521    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:53.282558    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:53.346952    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:53.346991    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:53.373976    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:53.373976    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:53.455812    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:53.445139   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:53.446098   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:53.447357   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:53.448734   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:53.450762   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:53.445139   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:53.446098   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:53.447357   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:53.448734   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:53.450762   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
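
Judging by the pgrep timestamps (09:10:40.8, :43.8, :46.9, :49.9, :52.9, :55.9), this sweep repeats roughly every three seconds while minikube waits for the apiserver process to appear. The readiness poll between cycles is simply:

    # The process check minikube repeats between log-gathering cycles (from the log).
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
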
	I1213 09:10:55.961126    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:55.980524    4604 kubeadm.go:602] duration metric: took 4m3.6754433s to restartPrimaryControlPlane
	W1213 09:10:55.980524    4604 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
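
After 4m3.7s of polling, restartPrimaryControlPlane gives up and minikube falls back to wiping cluster state with kubeadm reset and re-initializing from scratch; the literal "<no value>" in the warning looks like an unfilled template argument in minikube's out package rather than meaningful state. minikube then checks whether the kubelet unit is still active; the same check can be run by hand:

    # Mirror of the post-reset check minikube runs next (is-active exits
    # non-zero when the unit is inactive).
    sudo systemctl is-active --quiet kubelet && echo running || echo stopped
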
	I1213 09:10:55.985356    4604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1213 09:10:56.635426    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:10:56.658380    4604 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 09:10:56.677797    4604 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 09:10:56.682473    4604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 09:10:56.699107    4604 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 09:10:56.699107    4604 kubeadm.go:158] found existing configuration files:
	
	I1213 09:10:56.703291    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 09:10:56.719044    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 09:10:56.723277    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 09:10:56.742780    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 09:10:56.756514    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 09:10:56.760505    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 09:10:56.780196    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 09:10:56.793888    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 09:10:56.798332    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 09:10:56.817764    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 09:10:56.829936    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 09:10:56.833707    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
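
The grep/rm sequence above is minikube's stale-kubeconfig cleanup: each of the four kubeconfig files is checked for the expected control-plane endpoint (https://control-plane.minikube.internal:8441) and removed when the endpoint is absent. Here every grep exits with status 2 because kubeadm reset already deleted the files, so the rm -f calls are no-ops. Condensed, the check is:

    # Hedged sketch of the cleanup above (endpoint and file list from the log).
    endpoint="https://control-plane.minikube.internal:8441"
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done
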
	I1213 09:10:56.849696    4604 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 09:10:56.965661    4604 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1213 09:10:57.051298    4604 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 09:10:57.163109    4604 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 09:14:58.077510    4604 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 09:14:58.077510    4604 kubeadm.go:319] 
	I1213 09:14:58.077700    4604 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
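
This is the actual failure: kubeadm init, launched at 09:10:56, spent the full 4m0s wait-control-plane window polling the kubelet's local health endpoint and never got an answer, so the run aborted at 09:14:58. Everything that follows through kubeadm.go:319 is that attempt's buffered output replayed line by line. The probe kubeadm was retrying is the one named in the error:

    # Kubelet liveness probe used by kubeadm's wait-control-plane phase
    # (from the error text above).
    curl -sSL http://127.0.0.1:10248/healthz
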
	I1213 09:14:58.082513    4604 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 09:14:58.082513    4604 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 09:14:58.083105    4604 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 09:14:58.083105    4604 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1213 09:14:58.083105    4604 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1213 09:14:58.083105    4604 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1213 09:14:58.083105    4604 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1213 09:14:58.083105    4604 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1213 09:14:58.083630    4604 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1213 09:14:58.083660    4604 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1213 09:14:58.083660    4604 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1213 09:14:58.083660    4604 kubeadm.go:319] CONFIG_INET: enabled
	I1213 09:14:58.083660    4604 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1213 09:14:58.083660    4604 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1213 09:14:58.083660    4604 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1213 09:14:58.084184    4604 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1213 09:14:58.084411    4604 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1213 09:14:58.084511    4604 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1213 09:14:58.084637    4604 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1213 09:14:58.084788    4604 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1213 09:14:58.084950    4604 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1213 09:14:58.085041    4604 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1213 09:14:58.085041    4604 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1213 09:14:58.085041    4604 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1213 09:14:58.085041    4604 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1213 09:14:58.085041    4604 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1213 09:14:58.085041    4604 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1213 09:14:58.085561    4604 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1213 09:14:58.085629    4604 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1213 09:14:58.085787    4604 kubeadm.go:319] OS: Linux
	I1213 09:14:58.085905    4604 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 09:14:58.085994    4604 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 09:14:58.086095    4604 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 09:14:58.086249    4604 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 09:14:58.086375    4604 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 09:14:58.086436    4604 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 09:14:58.086559    4604 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 09:14:58.086680    4604 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 09:14:58.086776    4604 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 09:14:58.087006    4604 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 09:14:58.087282    4604 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 09:14:58.087282    4604 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 09:14:58.087282    4604 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 09:14:58.091333    4604 out.go:252]   - Generating certificates and keys ...
	I1213 09:14:58.091333    4604 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 09:14:58.091333    4604 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 09:14:58.091333    4604 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 09:14:58.091861    4604 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 09:14:58.091931    4604 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 09:14:58.091931    4604 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 09:14:58.091931    4604 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 09:14:58.091931    4604 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 09:14:58.091931    4604 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 09:14:58.091931    4604 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 09:14:58.091931    4604 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 09:14:58.091931    4604 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 09:14:58.091931    4604 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 09:14:58.091931    4604 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 09:14:58.092898    4604 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 09:14:58.092898    4604 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 09:14:58.092898    4604 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 09:14:58.092898    4604 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 09:14:58.092898    4604 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 09:14:58.096150    4604 out.go:252]   - Booting up control plane ...
	I1213 09:14:58.096150    4604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 09:14:58.096150    4604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 09:14:58.096150    4604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 09:14:58.096150    4604 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 09:14:58.096150    4604 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 09:14:58.096150    4604 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 09:14:58.097140    4604 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 09:14:58.097140    4604 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 09:14:58.097140    4604 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 09:14:58.097140    4604 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 09:14:58.097140    4604 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00081318s
	I1213 09:14:58.097140    4604 kubeadm.go:319] 
	I1213 09:14:58.097140    4604 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 09:14:58.097140    4604 kubeadm.go:319] 	- The kubelet is not running
	I1213 09:14:58.097140    4604 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 09:14:58.097140    4604 kubeadm.go:319] 
	I1213 09:14:58.098169    4604 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 09:14:58.098169    4604 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 09:14:58.098169    4604 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 09:14:58.098169    4604 kubeadm.go:319] 
	W1213 09:14:58.098169    4604 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00081318s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
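
minikube now retries exactly once: another kubeadm reset at 09:14:58, a second init with the same flags, and an identical timeout at 09:18:59 (below). Since preflight passed apart from warnings, the interesting lead is the SystemVerification warning: this WSL2 kernel runs cgroups v1, and per that warning a v1.35 kubelet requires an explicit opt-in ('FailCgroupV1' set to 'false') to run on cgroup v1 at all, which would explain a kubelet that never answers healthz. A hedged sketch of that opt-in (YAML field casing assumed from the warning's option name; the journalctl -u kubelet output gathered above would confirm whether this is the actual cause):

    # Hedged sketch: write the cgroup v1 opt-in the warning describes as a
    # KubeletConfiguration fragment (field name/casing assumed, not verified).
    cat <<'EOF' > kubelet-cgroupv1.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF
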
	
	I1213 09:14:58.103247    4604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1213 09:14:58.557280    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:14:58.576227    4604 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 09:14:58.580590    4604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 09:14:58.591916    4604 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 09:14:58.591916    4604 kubeadm.go:158] found existing configuration files:
	
	I1213 09:14:58.597377    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 09:14:58.611245    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 09:14:58.615321    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 09:14:58.633996    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 09:14:58.647865    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 09:14:58.651889    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 09:14:58.669442    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 09:14:58.682787    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 09:14:58.687832    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 09:14:58.708348    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 09:14:58.722058    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 09:14:58.727337    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 09:14:58.747003    4604 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 09:14:58.861078    4604 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1213 09:14:58.943511    4604 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 09:14:59.043878    4604 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 09:18:59.702905    4604 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 09:18:59.702984    4604 kubeadm.go:319] 
	I1213 09:18:59.703100    4604 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 09:18:59.706956    4604 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 09:18:59.706956    4604 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 09:18:59.708169    4604 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 09:18:59.708169    4604 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1213 09:18:59.708169    4604 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1213 09:18:59.708169    4604 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1213 09:18:59.708169    4604 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1213 09:18:59.708169    4604 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1213 09:18:59.708812    4604 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1213 09:18:59.708880    4604 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1213 09:18:59.708880    4604 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1213 09:18:59.708880    4604 kubeadm.go:319] CONFIG_INET: enabled
	I1213 09:18:59.708880    4604 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1213 09:18:59.708880    4604 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1213 09:18:59.708880    4604 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1213 09:18:59.708880    4604 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1213 09:18:59.708880    4604 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1213 09:18:59.708880    4604 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1213 09:18:59.708880    4604 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1213 09:18:59.709865    4604 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1213 09:18:59.710067    4604 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1213 09:18:59.710115    4604 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1213 09:18:59.710268    4604 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1213 09:18:59.710360    4604 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1213 09:18:59.710543    4604 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1213 09:18:59.710612    4604 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1213 09:18:59.710694    4604 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1213 09:18:59.710783    4604 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1213 09:18:59.710876    4604 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1213 09:18:59.710876    4604 kubeadm.go:319] OS: Linux
	I1213 09:18:59.710876    4604 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 09:18:59.710876    4604 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 09:18:59.710876    4604 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 09:18:59.710876    4604 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 09:18:59.710876    4604 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 09:18:59.711409    4604 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 09:18:59.711492    4604 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 09:18:59.711623    4604 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 09:18:59.711691    4604 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 09:18:59.711874    4604 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 09:18:59.712056    4604 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 09:18:59.712280    4604 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 09:18:59.712416    4604 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 09:18:59.717830    4604 out.go:252]   - Generating certificates and keys ...
	I1213 09:18:59.717830    4604 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 09:18:59.717830    4604 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 09:18:59.717830    4604 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 09:18:59.717830    4604 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 09:18:59.717830    4604 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 09:18:59.717830    4604 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 09:18:59.717830    4604 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 09:18:59.717830    4604 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 09:18:59.718841    4604 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 09:18:59.718841    4604 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 09:18:59.718841    4604 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 09:18:59.718841    4604 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 09:18:59.718841    4604 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 09:18:59.718841    4604 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 09:18:59.718841    4604 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 09:18:59.718841    4604 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 09:18:59.718841    4604 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 09:18:59.718841    4604 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 09:18:59.718841    4604 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 09:18:59.722958    4604 out.go:252]   - Booting up control plane ...
	I1213 09:18:59.722958    4604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 09:18:59.722958    4604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 09:18:59.722958    4604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 09:18:59.723960    4604 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 09:18:59.723960    4604 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 09:18:59.723960    4604 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 09:18:59.723960    4604 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 09:18:59.723960    4604 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 09:18:59.723960    4604 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 09:18:59.724966    4604 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 09:18:59.724966    4604 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001708609s
	I1213 09:18:59.724966    4604 kubeadm.go:319] 
	I1213 09:18:59.724966    4604 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 09:18:59.724966    4604 kubeadm.go:319] 	- The kubelet is not running
	I1213 09:18:59.724966    4604 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 09:18:59.724966    4604 kubeadm.go:319] 
	I1213 09:18:59.724966    4604 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 09:18:59.724966    4604 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 09:18:59.724966    4604 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 09:18:59.724966    4604 kubeadm.go:319] 
	I1213 09:18:59.725960    4604 kubeadm.go:403] duration metric: took 12m7.4678993s to StartCluster
	I1213 09:18:59.725960    4604 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:18:59.729959    4604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:18:59.791539    4604 cri.go:89] found id: ""
	I1213 09:18:59.791620    4604 logs.go:282] 0 containers: []
	W1213 09:18:59.791620    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:18:59.791620    4604 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 09:18:59.796126    4604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:18:59.838188    4604 cri.go:89] found id: ""
	I1213 09:18:59.838188    4604 logs.go:282] 0 containers: []
	W1213 09:18:59.838188    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:18:59.838188    4604 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 09:18:59.842219    4604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:18:59.886873    4604 cri.go:89] found id: ""
	I1213 09:18:59.886928    4604 logs.go:282] 0 containers: []
	W1213 09:18:59.886928    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:18:59.886959    4604 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:18:59.891184    4604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:18:59.935247    4604 cri.go:89] found id: ""
	I1213 09:18:59.935247    4604 logs.go:282] 0 containers: []
	W1213 09:18:59.935247    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:18:59.935247    4604 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:18:59.940658    4604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:18:59.979678    4604 cri.go:89] found id: ""
	I1213 09:18:59.979678    4604 logs.go:282] 0 containers: []
	W1213 09:18:59.979678    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:18:59.979678    4604 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:18:59.984360    4604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:19:00.029429    4604 cri.go:89] found id: ""
	I1213 09:19:00.029429    4604 logs.go:282] 0 containers: []
	W1213 09:19:00.029429    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:19:00.029429    4604 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 09:19:00.034206    4604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:19:00.078417    4604 cri.go:89] found id: ""
	I1213 09:19:00.078417    4604 logs.go:282] 0 containers: []
	W1213 09:19:00.078417    4604 logs.go:284] No container was found matching "kindnet"
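	The loop above queries every control-plane component with the same crictl command and finds nothing, which is consistent with a kubelet that never launched any static pod. The equivalent one-shot check, using the exact command the loop shells out to (a sketch; profile name taken from this run, executed from the host via minikube ssh):
	
		# list all CRI containers for one component; empty output means the
		# static pod was never created
		minikube -p functional-482100 ssh -- sudo crictl ps -a --quiet --name=kube-apiserver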
	I1213 09:19:00.078417    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:19:00.078417    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:19:00.158314    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:19:00.149922   40589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:00.150826   40589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:00.153483   40589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:00.154798   40589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:00.155843   40589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:19:00.149922   40589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:00.150826   40589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:00.153483   40589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:00.154798   40589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:00.155843   40589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:19:00.158314    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:19:00.158314    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:19:00.200907    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:19:00.201904    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:19:00.251291    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:19:00.251291    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:19:00.314330    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:19:00.314330    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 09:19:00.346177    4604 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001708609s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 09:19:00.346280    4604 out.go:285] * 
	W1213 09:19:00.346392    4604 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001708609s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 09:19:00.346427    4604 out.go:285] * 
	W1213 09:19:00.348597    4604 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 09:19:00.354189    4604 out.go:203] 
	W1213 09:19:00.361975    4604 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001708609s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 09:19:00.362101    4604 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 09:19:00.362101    4604 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 09:19:00.368166    4604 out.go:203] 
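	The suggestion above combines two steps: read the kubelet journal, then retry with an explicit cgroup driver. A minimal sketch of both, assuming the profile name and docker driver from this run (the --extra-config flag is the one named in the suggestion; whether it clears the cgroup v1 validation failure on this WSL2 kernel is not confirmed by this log):
	
		# inspect why the kubelet keeps exiting
		minikube -p functional-482100 ssh -- sudo journalctl -xeu kubelet | tail -n 50
		# retry with the cgroup driver the suggestion names
		minikube start -p functional-482100 --driver=docker \
		  --extra-config=kubelet.cgroup-driver=systemd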
	
	
	==> Docker <==
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.829030467Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.829036768Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.829059870Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.829091672Z" level=info msg="Initializing buildkit"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.942041157Z" level=info msg="Completed buildkit initialization"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.947761286Z" level=info msg="Daemon has completed initialization"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.947947300Z" level=info msg="API listen on [::]:2376"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.948053208Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.948082310Z" level=info msg="API listen on /run/docker.sock"
	Dec 13 09:06:48 functional-482100 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 13 09:06:49 functional-482100 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 09:06:49 functional-482100 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 13 09:06:49 functional-482100 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 13 09:06:49 functional-482100 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Start docker client with request timeout 0s"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Loaded network plugin cni"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 13 09:06:49 functional-482100 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
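	The cri-dockerd lines above settle on "Setting cgroupDriver cgroupfs", while the kubelet section below rejects cgroup v1 outright. A quick way to confirm what the engine inside the node reports (a sketch; the expected values are inferred from this log rather than re-verified):
	
		# print the engine's cgroup driver and cgroup version inside the node
		minikube -p functional-482100 ssh -- docker info --format '{{.CgroupDriver}} cgroup-v{{.CgroupVersion}}'
		# this log implies: cgroupfs cgroup-v1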
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:19:02.273724   40764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:02.274632   40764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:02.277028   40764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:02.278211   40764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:02.279738   40764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000787] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001010] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001229] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001341] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001210] FS:  0000000000000000 GS:  0000000000000000
	[Dec13 09:06] CPU: 10 PID: 66098 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000816] RIP: 0033:0x7fb64675ab20
	[  +0.000442] Code: Unable to access opcode bytes at RIP 0x7fb64675aaf6.
	[  +0.000680] RSP: 002b:00007ffe69215830 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000780] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000798] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000796] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000835] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000824] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000777] FS:  0000000000000000 GS:  0000000000000000
	[  +0.885911] CPU: 0 PID: 66226 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000821] RIP: 0033:0x7f29b6797b20
	[  +0.000390] Code: Unable to access opcode bytes at RIP 0x7f29b6797af6.
	[  +0.000688] RSP: 002b:00007fff1d5027b0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000799] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000781] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000770] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000791] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001021] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001388] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 09:19:02 up 55 min,  0 user,  load average: 0.35, 0.35, 0.44
	Linux functional-482100 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 09:18:58 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:18:59 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 13 09:18:59 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:18:59 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:18:59 functional-482100 kubelet[40495]: E1213 09:18:59.743445   40495 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:18:59 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:18:59 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:19:00 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 13 09:19:00 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:19:00 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:19:00 functional-482100 kubelet[40619]: E1213 09:19:00.512946   40619 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:19:00 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:19:00 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:19:01 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 13 09:19:01 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:19:01 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:19:01 functional-482100 kubelet[40648]: E1213 09:19:01.231389   40648 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:19:01 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:19:01 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:19:01 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 13 09:19:01 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:19:01 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:19:01 functional-482100 kubelet[40678]: E1213 09:19:01.995207   40678 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:19:01 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:19:01 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
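The kubelet journal closes the loop: restarts 320 through 323 all fail the same validation ("kubelet is configured to not run on a host using cgroup v1"), matching the kubeadm preflight warning that names the FailCgroupV1 option. A sketch of that opt-out expressed as a kubeadm kubelet-configuration patch, the same mechanism the "[patches] Applied patch ... to target kubeletconfiguration" lines earlier already use (field spelling taken from the warning; the patch directory path is an assumption and the fix is untested here):

	# hypothetical kubeadm patch letting the kubelet run on cgroup v1
	mkdir -p /var/tmp/minikube/patches
	cat > /var/tmp/minikube/patches/kubeletconfiguration.yaml <<'EOF'
	failCgroupV1: false
	EOF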
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-482100 -n functional-482100
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-482100 -n functional-482100: exit status 2 (558.1854ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-482100" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (741.38s)
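The bulk of the 741s spent here is the wait loop on the kubelet health probe that kubeadm names explicitly in the error. Reproducing the probe by hand is a one-liner against the same endpoint (run inside the node; profile name from this run):

	# the exact probe kubeadm reports timing out on
	minikube -p functional-482100 ssh -- curl -sSL http://127.0.0.1:10248/healthz
	# a healthy kubelet answers "ok"; here the connection fails because the
	# kubelet exits during startup validation (see the journal above)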

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (54.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-482100 get po -l tier=control-plane -n kube-system -o=json
E1213 09:19:46.015583    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-482100 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (50.3506983s)

                                                
                                                
** stderr ** 
	E1213 09:19:14.168616    9744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:19:24.252513    9744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:19:34.292397    9744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:19:44.333375    9744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:19:54.382117    9744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-482100 get po -l tier=control-plane -n kube-system -o=json": exit status 1
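Every kubectl retry above dies with EOF against https://127.0.0.1:63845. The docker inspect in the post-mortem below shows where that port comes from: it is the host-side publish of the container's 8441/tcp apiserver port. Probing the same endpoint directly (a sketch for the Windows host; the port number is specific to this run):

	# ask docker where the apiserver port is published
	docker port functional-482100 8441/tcp    # -> 127.0.0.1:63845 per the inspect below
	# probe it directly; with no apiserver behind the forward, the request
	# gets no response
	curl -k --max-time 5 https://127.0.0.1:63845/version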
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-482100
helpers_test.go:244: (dbg) docker inspect functional-482100:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa",
	        "Created": "2025-12-13T08:49:07.27080474Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43282,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T08:49:07.556748749Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/hostname",
	        "HostsPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/hosts",
	        "LogPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa-json.log",
	        "Name": "/functional-482100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-482100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-482100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91-init/diff:/var/lib/docker/overlay2/429aa299c6fcdb1695d08ec7c893c57c033afffcd3ec41fc904bf3236db5abde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-482100",
	                "Source": "/var/lib/docker/volumes/functional-482100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-482100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-482100",
	                "name.minikube.sigs.k8s.io": "functional-482100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0846ee7b9ca8cb54809a7d685cd1bf9a4ebcad80c4fa7d3ad64c01e27d0c8bc4",
	            "SandboxKey": "/var/run/docker/netns/0846ee7b9ca8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63841"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63842"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63844"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63845"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-482100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "88ce21d6cbdebdf878313475255fe0fbc85957ab9cf1fa33630b61bbbfd2061c",
	                    "EndpointID": "88d9584a7fae8c35f7938fb422a7bed2f8ec5a3db15bd02c0d2459ed9f8f0e4d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-482100",
	                        "688ac19b4403"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-482100 -n functional-482100
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-482100 -n functional-482100: exit status 2 (582.0754ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-482100 logs -n 25: (1.8370157s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                          ARGS                                                           │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-213400 image ls --format yaml --alsologtostderr                                                              │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ ssh     │ functional-213400 ssh pgrep buildkitd                                                                                   │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │                     │
	│ image   │ functional-213400 image ls --format json --alsologtostderr                                                              │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ image   │ functional-213400 image build -t localhost/my-image:functional-213400 testdata\build --alsologtostderr                  │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ image   │ functional-213400 image ls --format table --alsologtostderr                                                             │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ image   │ functional-213400 image ls                                                                                              │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:43 UTC │ 13 Dec 25 08:43 UTC │
	│ delete  │ -p functional-213400                                                                                                    │ functional-213400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:48 UTC │ 13 Dec 25 08:48 UTC │
	│ start   │ -p functional-482100 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:48 UTC │                     │
	│ start   │ -p functional-482100 --alsologtostderr -v=8                                                                             │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:57 UTC │                     │
	│ cache   │ functional-482100 cache add registry.k8s.io/pause:3.1                                                                   │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ functional-482100 cache add registry.k8s.io/pause:3.3                                                                   │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ functional-482100 cache add registry.k8s.io/pause:latest                                                                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ functional-482100 cache add minikube-local-cache-test:functional-482100                                                 │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ functional-482100 cache delete minikube-local-cache-test:functional-482100                                              │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ list                                                                                                                    │ minikube          │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ ssh     │ functional-482100 ssh sudo crictl images                                                                                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ ssh     │ functional-482100 ssh sudo docker rmi registry.k8s.io/pause:latest                                                      │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ ssh     │ functional-482100 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │                     │
	│ cache   │ functional-482100 cache reload                                                                                          │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ ssh     │ functional-482100 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                     │ minikube          │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │ 13 Dec 25 09:04 UTC │
	│ kubectl │ functional-482100 kubectl -- --context functional-482100 get pods                                                       │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:04 UTC │                     │
	│ start   │ -p functional-482100 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
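	The audit rows above capture a full cache round-trip against the node: add an image, delete it inside the node, then reload the cache and verify with crictl. The same sequence, lifted from the table (rendered here with the -p profile flag):

	    minikube -p functional-482100 cache add registry.k8s.io/pause:latest
	    minikube -p functional-482100 ssh sudo docker rmi registry.k8s.io/pause:latest
	    minikube -p functional-482100 cache reload
	    minikube -p functional-482100 ssh sudo crictl inspecti registry.k8s.io/pause:latest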
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:06:42
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:06:42.717723    4604 out.go:360] Setting OutFile to fd 964 ...
	I1213 09:06:42.759720    4604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:06:42.759720    4604 out.go:374] Setting ErrFile to fd 1684...
	I1213 09:06:42.759720    4604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:06:42.775684    4604 out.go:368] Setting JSON to false
	I1213 09:06:42.778565    4604 start.go:133] hostinfo: {"hostname":"minikube4","uptime":2610,"bootTime":1765614192,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 09:06:42.778565    4604 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 09:06:42.783192    4604 out.go:179] * [functional-482100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 09:06:42.786187    4604 notify.go:221] Checking for updates...
	I1213 09:06:42.786345    4604 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 09:06:42.788643    4604 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:06:42.791579    4604 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 09:06:42.793982    4604 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:06:42.796424    4604 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:06:42.798851    4604 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 09:06:42.799423    4604 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:06:42.991260    4604 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 09:06:42.994416    4604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:06:43.223298    4604 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-13 09:06:43.202416057 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 09:06:43.228742    4604 out.go:179] * Using the docker driver based on existing profile
	I1213 09:06:43.237191    4604 start.go:309] selected driver: docker
	I1213 09:06:43.237191    4604 start.go:927] validating driver "docker" against &{Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:06:43.238191    4604 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:06:43.244191    4604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:06:43.469724    4604 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-13 09:06:43.451401286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 09:06:43.566702    4604 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:06:43.567247    4604 cni.go:84] Creating CNI manager for ""
	I1213 09:06:43.567332    4604 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 09:06:43.567332    4604 start.go:353] cluster config:
	{Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:06:43.571338    4604 out.go:179] * Starting "functional-482100" primary control-plane node in "functional-482100" cluster
	I1213 09:06:43.574242    4604 cache.go:134] Beginning downloading kic base image for docker with docker
	I1213 09:06:43.576258    4604 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 09:06:43.580317    4604 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 09:06:43.580377    4604 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 09:06:43.580526    4604 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1213 09:06:43.580526    4604 cache.go:65] Caching tarball of preloaded images
	I1213 09:06:43.580984    4604 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 09:06:43.581085    4604 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1213 09:06:43.581294    4604 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\config.json ...
	I1213 09:06:43.661395    4604 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 09:06:43.661446    4604 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 09:06:43.661502    4604 cache.go:243] Successfully downloaded all kic artifacts
	I1213 09:06:43.661597    4604 start.go:360] acquireMachinesLock for functional-482100: {Name:mkdbad0c5d0c221588a4a9490c5c0730668b0a50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:06:43.661744    4604 start.go:364] duration metric: took 97.5µs to acquireMachinesLock for "functional-482100"
	I1213 09:06:43.661894    4604 start.go:96] Skipping create...Using existing machine configuration
	I1213 09:06:43.661968    4604 fix.go:54] fixHost starting: 
	I1213 09:06:43.668789    4604 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
	I1213 09:06:43.726255    4604 fix.go:112] recreateIfNeeded on functional-482100: state=Running err=<nil>
	W1213 09:06:43.726255    4604 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 09:06:43.729251    4604 out.go:252] * Updating the running docker "functional-482100" container ...
	I1213 09:06:43.729251    4604 machine.go:94] provisionDockerMachine start ...
	I1213 09:06:43.733252    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:43.788369    4604 main.go:143] libmachine: Using SSH client type: native
	I1213 09:06:43.788946    4604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 09:06:43.788946    4604 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 09:06:43.970841    4604 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-482100
	
	I1213 09:06:43.970841    4604 ubuntu.go:182] provisioning hostname "functional-482100"
	I1213 09:06:43.974885    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:44.031548    4604 main.go:143] libmachine: Using SSH client type: native
	I1213 09:06:44.032011    4604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 09:06:44.032011    4604 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-482100 && echo "functional-482100" | sudo tee /etc/hostname
	I1213 09:06:44.226185    4604 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-482100
	
	I1213 09:06:44.230480    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:44.283942    4604 main.go:143] libmachine: Using SSH client type: native
	I1213 09:06:44.284648    4604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 09:06:44.284648    4604 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-482100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-482100/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-482100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 09:06:44.459239    4604 main.go:143] libmachine: SSH cmd err, output: <nil>: 
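	The /etc/hosts snippet just above is idempotent: it touches the file only when no line already ends in the hostname, and it prefers rewriting an existing 127.0.1.1 entry over appending a new one. A quick manual check of the result (same GNU-grep \s pattern the script itself uses):

	    grep -x '127.0.1.1\s.*' /etc/hosts    # should show: 127.0.1.1 functional-482100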
	I1213 09:06:44.459239    4604 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1213 09:06:44.459239    4604 ubuntu.go:190] setting up certificates
	I1213 09:06:44.459239    4604 provision.go:84] configureAuth start
	I1213 09:06:44.464098    4604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-482100
	I1213 09:06:44.517408    4604 provision.go:143] copyHostCerts
	I1213 09:06:44.518409    4604 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1213 09:06:44.518409    4604 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1213 09:06:44.518409    4604 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1213 09:06:44.519524    4604 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1213 09:06:44.519524    4604 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1213 09:06:44.519524    4604 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1213 09:06:44.520761    4604 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1213 09:06:44.520761    4604 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1213 09:06:44.520761    4604 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1213 09:06:44.521333    4604 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-482100 san=[127.0.0.1 192.168.49.2 functional-482100 localhost minikube]
	I1213 09:06:44.683862    4604 provision.go:177] copyRemoteCerts
	I1213 09:06:44.688852    4604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 09:06:44.691943    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:44.744886    4604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 09:06:44.879038    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 09:06:44.911005    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 09:06:44.941373    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 09:06:44.969809    4604 provision.go:87] duration metric: took 510.5655ms to configureAuth
	I1213 09:06:44.969809    4604 ubuntu.go:206] setting minikube options for container-runtime
	I1213 09:06:44.970648    4604 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 09:06:44.974094    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:45.031966    4604 main.go:143] libmachine: Using SSH client type: native
	I1213 09:06:45.032404    4604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 09:06:45.032404    4604 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 09:06:45.211091    4604 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1213 09:06:45.211091    4604 ubuntu.go:71] root file system type: overlay
	I1213 09:06:45.211091    4604 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 09:06:45.214999    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:45.278005    4604 main.go:143] libmachine: Using SSH client type: native
	I1213 09:06:45.278423    4604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 09:06:45.278519    4604 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 09:06:45.475276    4604 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 09:06:45.478711    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:45.533172    4604 main.go:143] libmachine: Using SSH client type: native
	I1213 09:06:45.533745    4604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 09:06:45.533745    4604 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 09:06:45.728810    4604 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 09:06:45.728810    4604 machine.go:97] duration metric: took 1.999543s to provisionDockerMachine
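	provisionDockerMachine finishes with a compare-and-swap on the unit file: `diff -u` exits 0 when docker.service.new matches the live unit, so the mv/daemon-reload/restart branch after `||` runs only when the rendered configuration actually changed; here the diff output is empty and docker is left running. A hypothetical spot check of which ExecStart is in effect (not part of this run):

	    sudo systemctl cat docker.service | grep -A1 '^ExecStart='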
	I1213 09:06:45.728810    4604 start.go:293] postStartSetup for "functional-482100" (driver="docker")
	I1213 09:06:45.728810    4604 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 09:06:45.732939    4604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 09:06:45.736061    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:45.792193    4604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 09:06:45.929940    4604 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 09:06:45.938024    4604 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 09:06:45.938024    4604 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 09:06:45.938024    4604 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1213 09:06:45.939007    4604 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1213 09:06:45.939007    4604 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> 29682.pem in /etc/ssl/certs
	I1213 09:06:45.940034    4604 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\2968\hosts -> hosts in /etc/test/nested/copy/2968
	I1213 09:06:45.944509    4604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2968
	I1213 09:06:45.956570    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /etc/ssl/certs/29682.pem (1708 bytes)
	I1213 09:06:45.988344    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\2968\hosts --> /etc/test/nested/copy/2968/hosts (40 bytes)
	I1213 09:06:46.020180    4604 start.go:296] duration metric: took 291.3676ms for postStartSetup
	I1213 09:06:46.024635    4604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:06:46.027628    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:46.080253    4604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 09:06:46.215093    4604 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 09:06:46.224875    4604 fix.go:56] duration metric: took 2.5628868s for fixHost
	I1213 09:06:46.224875    4604 start.go:83] releasing machines lock for "functional-482100", held for 2.5631106s
	I1213 09:06:46.227979    4604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-482100
	I1213 09:06:46.281460    4604 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1213 09:06:46.284589    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:46.284589    4604 ssh_runner.go:195] Run: cat /version.json
	I1213 09:06:46.287589    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:46.339381    4604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 09:06:46.341884    4604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	W1213 09:06:46.471031    4604 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
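	The exit-127 here is self-inflicted: the connectivity probe runs the Windows binary name curl.exe inside the Linux node, where only plain curl could exist, so the probe cannot succeed, and that is presumably why the registry warning appears a few lines below. A manual recheck from the host would look something like (container name from this log):

	    docker exec functional-482100 sh -c 'if command -v curl >/dev/null; then curl -sS -m 2 https://registry.k8s.io/; else echo "no curl in node"; fi'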
	I1213 09:06:46.475772    4604 ssh_runner.go:195] Run: systemctl --version
	I1213 09:06:46.491471    4604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 09:06:46.501246    4604 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 09:06:46.506902    4604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 09:06:46.521536    4604 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 09:06:46.521536    4604 start.go:496] detecting cgroup driver to use...
	I1213 09:06:46.521536    4604 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 09:06:46.521536    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 09:06:46.547922    4604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 09:06:46.569619    4604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 09:06:46.584943    4604 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 09:06:46.588980    4604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1213 09:06:46.598267    4604 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1213 09:06:46.598267    4604 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1213 09:06:46.612904    4604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 09:06:46.631660    4604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 09:06:46.651016    4604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 09:06:46.672904    4604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 09:06:46.691930    4604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 09:06:46.710477    4604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 09:06:46.730250    4604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 09:06:46.750913    4604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 09:06:46.770554    4604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 09:06:46.792378    4604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:06:47.034402    4604 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 09:06:47.276302    4604 start.go:496] detecting cgroup driver to use...
	I1213 09:06:47.276363    4604 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 09:06:47.280722    4604 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 09:06:47.305066    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 09:06:47.327135    4604 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 09:06:47.404977    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 09:06:47.431107    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 09:06:47.450015    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 09:06:47.478646    4604 ssh_runner.go:195] Run: which cri-dockerd
	I1213 09:06:47.491243    4604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 09:06:47.503124    4604 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1213 09:06:47.527239    4604 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 09:06:47.667767    4604 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 09:06:47.799062    4604 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 09:06:47.799062    4604 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 09:06:47.826470    4604 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1213 09:06:47.848448    4604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:06:47.994955    4604 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 09:06:48.954293    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 09:06:48.976829    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 09:06:49.001926    4604 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1213 09:06:49.028432    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 09:06:49.050748    4604 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 09:06:49.205807    4604 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 09:06:49.342941    4604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:06:49.483831    4604 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 09:06:49.508934    4604 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1213 09:06:49.531916    4604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:06:49.703017    4604 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 09:06:49.814910    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 09:06:49.832973    4604 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 09:06:49.837568    4604 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 09:06:49.846585    4604 start.go:564] Will wait 60s for crictl version
	I1213 09:06:49.850486    4604 ssh_runner.go:195] Run: which crictl
	I1213 09:06:49.861564    4604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 09:06:49.905261    4604 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
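	crictl reports RuntimeName docker because the crictl.yaml written moments earlier points runtime-endpoint at /var/run/cri-dockerd.sock, so CRI calls are served by cri-dockerd in front of Docker 29.1.2 inside the node. The equivalent invocation with the endpoint passed explicitly instead of via the config file:

	    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version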
	I1213 09:06:49.909293    4604 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 09:06:49.949851    4604 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 09:06:49.999228    4604 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1213 09:06:50.003267    4604 cli_runner.go:164] Run: docker exec -t functional-482100 dig +short host.docker.internal
	I1213 09:06:50.178404    4604 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1213 09:06:50.184053    4604 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1213 09:06:50.194897    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:50.254370    4604 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1213 09:06:50.256155    4604 kubeadm.go:884] updating cluster {Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 09:06:50.256766    4604 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 09:06:50.259593    4604 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 09:06:50.291635    4604 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-482100
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1213 09:06:50.291635    4604 docker.go:621] Images already preloaded, skipping extraction
	I1213 09:06:50.295568    4604 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 09:06:50.325004    4604 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-482100
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
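	Per the file markers above, the image list is queried twice: once for docker.go's preload-extraction check and again for cache_images.go's load check; both passes find every v1.35.0-beta.0 component already present, so extraction and loading are skipped. The same inventory taken by hand:

	    minikube -p functional-482100 ssh -- docker images --format '{{.Repository}}:{{.Tag}}'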
	I1213 09:06:50.325004    4604 cache_images.go:86] Images are preloaded, skipping loading
	I1213 09:06:50.325004    4604 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1213 09:06:50.325004    4604 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-482100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
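	The generated kubelet drop-in above follows the same ExecStart-clearing pattern as the docker unit: blank out the inherited command, then relaunch the versioned binary with --hostname-override and --node-ip pinned to the values from the cluster config. Once the node is up, a hypothetical way to confirm the flags took effect:

	    minikube -p functional-482100 ssh -- pgrep -af kubelet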
	I1213 09:06:50.328257    4604 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1213 09:06:50.622080    4604 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1213 09:06:50.622145    4604 cni.go:84] Creating CNI manager for ""
	I1213 09:06:50.622145    4604 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 09:06:50.622208    4604 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 09:06:50.622208    4604 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-482100 NodeName:functional-482100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 09:06:50.622373    4604 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-482100"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 09:06:50.626372    4604 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 09:06:50.640912    4604 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 09:06:50.644769    4604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 09:06:50.657199    4604 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1213 09:06:50.677193    4604 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 09:06:50.697253    4604 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
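The rendered kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration stacked above) lands in /var/tmp/minikube/kubeadm.yaml.new. A hedged sketch for sanity-checking such a file, assuming the bundled kubeadm is recent enough to ship the 'kubeadm config validate' subcommand:

    # Validate the generated config with the same kubeadm binary minikube uses:
    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
      kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new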
	I1213 09:06:50.723871    4604 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 09:06:50.735113    4604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:06:50.895085    4604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:06:51.205789    4604 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100 for IP: 192.168.49.2
	I1213 09:06:51.205789    4604 certs.go:195] generating shared ca certs ...
	I1213 09:06:51.205789    4604 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:06:51.206694    4604 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1213 09:06:51.206931    4604 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1213 09:06:51.207202    4604 certs.go:257] generating profile certs ...
	I1213 09:06:51.207247    4604 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\client.key
	I1213 09:06:51.207958    4604 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.key.13621831
	I1213 09:06:51.207958    4604 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.key
	I1213 09:06:51.208796    4604 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem (1338 bytes)
	W1213 09:06:51.208796    4604 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968_empty.pem, impossibly tiny 0 bytes
	I1213 09:06:51.209325    4604 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1213 09:06:51.209671    4604 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1213 09:06:51.209671    4604 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1213 09:06:51.209671    4604 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1213 09:06:51.210415    4604 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem (1708 bytes)
	I1213 09:06:51.211988    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 09:06:51.241166    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 09:06:51.270190    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 09:06:51.305732    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 09:06:51.336212    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 09:06:51.365643    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 09:06:51.395250    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 09:06:51.426424    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 09:06:51.456416    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 09:06:51.485568    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem --> /usr/share/ca-certificates/2968.pem (1338 bytes)
	I1213 09:06:51.513607    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /usr/share/ca-certificates/29682.pem (1708 bytes)
	I1213 09:06:51.544659    4604 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 09:06:51.569245    4604 ssh_runner.go:195] Run: openssl version
	I1213 09:06:51.589082    4604 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:06:51.610612    4604 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 09:06:51.632111    4604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:06:51.640287    4604 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:06:51.644860    4604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:06:51.695068    4604 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 09:06:51.712089    4604 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2968.pem
	I1213 09:06:51.730159    4604 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2968.pem /etc/ssl/certs/2968.pem
	I1213 09:06:51.750455    4604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2968.pem
	I1213 09:06:51.759490    4604 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:48 /usr/share/ca-certificates/2968.pem
	I1213 09:06:51.764057    4604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2968.pem
	I1213 09:06:51.813702    4604 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 09:06:51.830987    4604 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29682.pem
	I1213 09:06:51.848737    4604 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29682.pem /etc/ssl/certs/29682.pem
	I1213 09:06:51.866735    4604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29682.pem
	I1213 09:06:51.874087    4604 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:48 /usr/share/ca-certificates/29682.pem
	I1213 09:06:51.878230    4604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29682.pem
	I1213 09:06:51.926970    4604 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
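The trailing tests check OpenSSL's hashed-name symlinks: each CA in /etc/ssl/certs is linked as <subject-hash>.0. The pairing can be verified by hand (paths and hash taken from the log above):

    # Print the subject hash that names the symlink for 29682.pem:
    openssl x509 -hash -noout -in /usr/share/ca-certificates/29682.pem   # prints 3ec20f2e
    ls -l /etc/ssl/certs/3ec20f2e.0                                      # points back at 29682.pem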
	I1213 09:06:51.943705    4604 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 09:06:51.956247    4604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 09:06:52.006902    4604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 09:06:52.056817    4604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 09:06:52.106649    4604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 09:06:52.159409    4604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 09:06:52.206463    4604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
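Each -checkend run above asks whether a certificate stays valid for another 86400 seconds (24 h); openssl exits 0 if so and non-zero if the cert would expire in that window, which is what triggers regeneration. A minimal sketch:

    # Exit status 0: still valid in 24 h; non-zero: expiring, regenerate it.
    openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
      && echo valid || echo expiring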
	I1213 09:06:52.251679    4604 kubeadm.go:401] StartCluster: {Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:06:52.256595    4604 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 09:06:52.289711    4604 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 09:06:52.303076    4604 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 09:06:52.303076    4604 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 09:06:52.307600    4604 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 09:06:52.319493    4604 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:06:52.323244    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:52.375973    4604 kubeconfig.go:125] found "functional-482100" server: "https://127.0.0.1:63845"
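The server URL is recovered from Docker's published-port table rather than from any file inside the node; the same lookup works from any shell on the host:

    # Which host port did Docker map to the apiserver's 8441/tcp?
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-482100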
	I1213 09:06:52.384564    4604 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 09:06:52.400436    4604 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 08:49:19.464397186 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 09:06:50.708121923 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
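Drift detection is a plain unified diff: exit status 0 means the deployed and freshly rendered configs match, 1 means they differ and the control plane must be reconfigured. A minimal sketch:

    # A non-zero diff status is what minikube treats as config drift here:
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      && echo "in sync" || echo "drift: reconfigure"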
	I1213 09:06:52.400436    4604 kubeadm.go:1161] stopping kube-system containers ...
	I1213 09:06:52.404765    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 09:06:52.439058    4604 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 09:06:52.463926    4604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 09:06:52.476815    4604 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 13 08:53 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 13 08:53 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 13 08:53 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 13 08:53 /etc/kubernetes/scheduler.conf
	
	I1213 09:06:52.482061    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 09:06:52.502735    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 09:06:52.519106    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:06:52.523157    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 09:06:52.541594    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 09:06:52.557952    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:06:52.562286    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 09:06:52.581460    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 09:06:52.594972    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:06:52.600191    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
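Each kubeconfig-style file above is grepped for the expected control-plane endpoint; grep exiting 1 (no match) is the cue that the file is stale, so it is removed and left for kubeadm to regenerate. A minimal sketch of the same check:

    # Drop the file when it no longer points at the expected control plane:
    sudo grep -q 'https://control-plane.minikube.internal:8441' /etc/kubernetes/scheduler.conf \
      || sudo rm -f /etc/kubernetes/scheduler.conf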
	I1213 09:06:52.618621    4604 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 09:06:52.641664    4604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 09:06:52.896546    4604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 09:06:53.462301    4604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 09:06:53.694179    4604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 09:06:53.760215    4604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
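Rather than a full 'kubeadm init', the restart path replays the five phases individually against the same config. A minimal sketch of the equivalent manual sequence, with paths taken from the log:

    KPATH=/var/lib/minikube/binaries/v1.35.0-beta.0
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      # $phase is expanded unquoted on purpose so "certs all" becomes two arguments.
      sudo env PATH="$KPATH:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done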
	I1213 09:06:53.817909    4604 api_server.go:52] waiting for apiserver process to appear ...
	I1213 09:06:53.824127    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:54.324298    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:54.823616    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:55.323720    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:55.823860    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:56.324648    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:56.823338    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:57.323932    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:57.823662    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:58.325441    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:58.823290    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:59.324178    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:59.823834    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:00.323384    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:00.824342    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:01.322728    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:01.825381    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:02.323125    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:02.823650    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:03.323054    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:03.823648    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:04.323519    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:04.822908    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:05.323004    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:05.823657    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:06.324223    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:06.822603    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:07.322828    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:07.824194    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:08.323166    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:08.823223    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:09.322943    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:09.823068    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:10.323743    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:10.823847    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:11.325801    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:11.823253    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:12.323701    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:12.823566    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:13.323096    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:13.822920    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:14.323236    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:14.822845    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:15.323202    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:15.823028    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:16.320733    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:16.823214    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:17.323253    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:17.823515    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:18.323838    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:18.822838    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:19.323955    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:19.823948    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:20.324026    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:20.823129    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:21.323245    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:21.823815    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:22.323343    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:22.823677    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:23.323428    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:23.823426    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:24.323295    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:24.823766    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:25.323104    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:25.824973    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:26.323001    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:26.822856    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:27.323222    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:27.824487    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:28.325702    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:28.823423    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:29.324186    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:29.824044    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:30.324049    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:30.822878    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:31.323296    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:31.823313    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:32.322735    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:32.824301    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:33.324665    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:33.823915    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:34.323027    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:34.823403    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:35.323680    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:35.824836    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:36.323334    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:36.823224    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:37.324136    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:37.824342    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:38.323652    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:38.825016    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:39.325354    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:39.824443    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:40.323965    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:40.824628    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:41.324070    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:41.824202    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:42.325124    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:42.823287    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:43.324764    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:43.823938    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:44.323817    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:44.823922    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:45.324123    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:45.824182    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:46.325015    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:46.824205    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:47.323091    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:47.823407    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:48.322847    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:48.823901    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:49.325349    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:49.824694    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:50.323496    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:50.824112    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:51.323585    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:51.825519    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:52.323663    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:52.824612    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:53.324473    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
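The wall of pgrep calls above is a fixed-cadence poll: roughly every 500 ms the runner asks whether a kube-apiserver process has appeared, and after about a minute it gives up and falls into log gathering. A minimal sketch of the same loop:

    # Poll twice a second, for up to 60 seconds, for an apiserver process:
    for i in $(seq 1 120); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 0.5
    done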
	I1213 09:07:53.823636    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:07:53.968254    4604 logs.go:282] 0 containers: []
	W1213 09:07:53.968254    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:07:53.971723    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:07:54.005821    4604 logs.go:282] 0 containers: []
	W1213 09:07:54.005868    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:07:54.009997    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:07:54.043633    4604 logs.go:282] 0 containers: []
	W1213 09:07:54.043633    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:07:54.047702    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:07:54.077692    4604 logs.go:282] 0 containers: []
	W1213 09:07:54.077692    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:07:54.081464    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:07:54.109644    4604 logs.go:282] 0 containers: []
	W1213 09:07:54.109644    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:07:54.113266    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:07:54.141926    4604 logs.go:282] 0 containers: []
	W1213 09:07:54.141926    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:07:54.145352    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:07:54.178100    4604 logs.go:282] 0 containers: []
	W1213 09:07:54.178100    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:07:54.178100    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:07:54.178164    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:07:54.252196    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:07:54.252196    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:07:54.284935    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:07:54.285971    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:07:54.538213    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:07:54.529451   23686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:54.530614   23686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:54.531692   23686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:54.532968   23686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:54.534319   23686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:07:54.529451   23686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:54.530614   23686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:54.531692   23686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:54.532968   23686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:54.534319   23686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
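Every attempt fails the same way: nothing is listening on localhost:8441, so the apiserver never came back up after the restart. A minimal probe of the same endpoint (the /healthz path is served without authentication once the apiserver is up):

    # "Connection refused" here confirms the apiserver is simply not listening:
    curl -sk https://localhost:8441/healthz || echo "apiserver not listening"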
	I1213 09:07:54.538213    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:07:54.538213    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:07:54.583090    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:07:54.583090    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:07:57.312809    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:57.335927    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:07:57.368850    4604 logs.go:282] 0 containers: []
	W1213 09:07:57.368850    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:07:57.372314    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:07:57.414423    4604 logs.go:282] 0 containers: []
	W1213 09:07:57.414423    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:07:57.418091    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:07:57.445624    4604 logs.go:282] 0 containers: []
	W1213 09:07:57.445624    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:07:57.450351    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:07:57.478804    4604 logs.go:282] 0 containers: []
	W1213 09:07:57.478804    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:07:57.482347    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:07:57.515270    4604 logs.go:282] 0 containers: []
	W1213 09:07:57.515270    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:07:57.519226    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:07:57.550203    4604 logs.go:282] 0 containers: []
	W1213 09:07:57.550203    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:07:57.553796    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:07:57.581350    4604 logs.go:282] 0 containers: []
	W1213 09:07:57.581350    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:07:57.581350    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:07:57.581350    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:07:57.643200    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:07:57.643200    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:07:57.673988    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:07:57.673988    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:07:57.760392    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:07:57.746772   23846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:57.747806   23846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:57.748611   23846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:57.753804   23846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:57.755158   23846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:07:57.746772   23846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:57.747806   23846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:57.748611   23846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:57.753804   23846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:57.755158   23846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:07:57.760392    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:07:57.760392    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:07:57.802849    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:07:57.802849    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:00.359379    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:00.382695    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:00.413789    4604 logs.go:282] 0 containers: []
	W1213 09:08:00.413789    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:00.417939    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:00.446378    4604 logs.go:282] 0 containers: []
	W1213 09:08:00.446378    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:00.449613    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:00.482176    4604 logs.go:282] 0 containers: []
	W1213 09:08:00.482176    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:00.485918    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:00.515814    4604 logs.go:282] 0 containers: []
	W1213 09:08:00.515814    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:00.519425    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:00.550561    4604 logs.go:282] 0 containers: []
	W1213 09:08:00.550614    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:00.554312    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:00.581925    4604 logs.go:282] 0 containers: []
	W1213 09:08:00.582019    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:00.586945    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:00.614309    4604 logs.go:282] 0 containers: []
	W1213 09:08:00.614309    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:00.614309    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:00.614309    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:00.677303    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:00.677303    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:00.708357    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:00.708388    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:00.792820    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:00.783680   24008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:00.784993   24008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:00.786265   24008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:00.787013   24008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:00.789215   24008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:08:00.783680   24008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:00.784993   24008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:00.786265   24008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:00.787013   24008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:00.789215   24008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:08:00.792820    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:00.792820    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:00.834035    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:00.834035    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:03.387456    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:03.409689    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:03.440566    4604 logs.go:282] 0 containers: []
	W1213 09:08:03.440566    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:03.446132    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:03.481808    4604 logs.go:282] 0 containers: []
	W1213 09:08:03.481808    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:03.484917    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:03.516053    4604 logs.go:282] 0 containers: []
	W1213 09:08:03.516053    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:03.519249    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:03.549448    4604 logs.go:282] 0 containers: []
	W1213 09:08:03.549448    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:03.553206    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:03.580932    4604 logs.go:282] 0 containers: []
	W1213 09:08:03.580932    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:03.585400    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:03.615096    4604 logs.go:282] 0 containers: []
	W1213 09:08:03.615096    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:03.618691    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:03.650537    4604 logs.go:282] 0 containers: []
	W1213 09:08:03.650537    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:03.650537    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:03.650537    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:03.715560    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:03.715560    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:03.745557    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:03.745557    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:03.830341    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:03.818412   24157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:03.819378   24157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:03.820920   24157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:03.822091   24157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:03.823691   24157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:08:03.818412   24157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:03.819378   24157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:03.820920   24157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:03.822091   24157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:03.823691   24157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:08:03.830341    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:03.830341    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:03.873599    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:03.873599    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:06.430406    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:06.454482    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:06.484232    4604 logs.go:282] 0 containers: []
	W1213 09:08:06.484232    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:06.489209    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:06.519685    4604 logs.go:282] 0 containers: []
	W1213 09:08:06.519685    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:06.523281    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:06.552228    4604 logs.go:282] 0 containers: []
	W1213 09:08:06.552228    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:06.556002    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:06.585247    4604 logs.go:282] 0 containers: []
	W1213 09:08:06.585301    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:06.588771    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:06.616709    4604 logs.go:282] 0 containers: []
	W1213 09:08:06.616709    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:06.622086    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:06.649957    4604 logs.go:282] 0 containers: []
	W1213 09:08:06.649957    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:06.653592    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:06.684273    4604 logs.go:282] 0 containers: []
	W1213 09:08:06.684273    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:06.684273    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:06.684273    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:06.712577    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:06.712577    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:06.795376    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:06.784575   24302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:06.785371   24302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:06.786679   24302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:06.787911   24302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:06.789050   24302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:06.795376    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:06.795898    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:06.839065    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:06.839065    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:06.889079    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:06.889079    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:09.455581    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:09.480052    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:09.512625    4604 logs.go:282] 0 containers: []
	W1213 09:08:09.512625    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:09.516455    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:09.542431    4604 logs.go:282] 0 containers: []
	W1213 09:08:09.542499    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:09.547418    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:09.577381    4604 logs.go:282] 0 containers: []
	W1213 09:08:09.577381    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:09.581054    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:09.609734    4604 logs.go:282] 0 containers: []
	W1213 09:08:09.609809    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:09.614960    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:09.640858    4604 logs.go:282] 0 containers: []
	W1213 09:08:09.640858    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:09.644539    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:09.673297    4604 logs.go:282] 0 containers: []
	W1213 09:08:09.673324    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:09.676963    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:09.706066    4604 logs.go:282] 0 containers: []
	W1213 09:08:09.706097    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:09.706097    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:09.706097    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:09.770379    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:09.770379    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:09.800715    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:09.800715    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:09.888345    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:09.874561   24459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:09.876116   24459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:09.878447   24459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:09.880145   24459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:09.881085   24459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:09.888366    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:09.888366    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:09.931503    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:09.931503    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:12.488194    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:12.511945    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:12.543092    4604 logs.go:282] 0 containers: []
	W1213 09:08:12.543092    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:12.546813    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:12.575244    4604 logs.go:282] 0 containers: []
	W1213 09:08:12.575244    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:12.579183    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:12.606211    4604 logs.go:282] 0 containers: []
	W1213 09:08:12.606211    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:12.609921    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:12.638793    4604 logs.go:282] 0 containers: []
	W1213 09:08:12.638793    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:12.642301    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:12.671214    4604 logs.go:282] 0 containers: []
	W1213 09:08:12.671250    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:12.675013    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:12.704218    4604 logs.go:282] 0 containers: []
	W1213 09:08:12.704218    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:12.708216    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:12.738811    4604 logs.go:282] 0 containers: []
	W1213 09:08:12.738811    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:12.738811    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:12.738811    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:12.801161    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:12.801161    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:12.830060    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:12.831060    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:12.915147    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:12.903878   24612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:12.904809   24612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:12.906430   24612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:12.907805   24612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:12.908973   24612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:12.915147    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:12.915147    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:12.956625    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:12.956625    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:15.510904    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:15.533124    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:15.562214    4604 logs.go:282] 0 containers: []
	W1213 09:08:15.562214    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:15.565621    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:15.590955    4604 logs.go:282] 0 containers: []
	W1213 09:08:15.591009    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:15.594833    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:15.624408    4604 logs.go:282] 0 containers: []
	W1213 09:08:15.624408    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:15.628727    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:15.659837    4604 logs.go:282] 0 containers: []
	W1213 09:08:15.659837    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:15.663513    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:15.690393    4604 logs.go:282] 0 containers: []
	W1213 09:08:15.690393    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:15.693797    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:15.724206    4604 logs.go:282] 0 containers: []
	W1213 09:08:15.724206    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:15.730221    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:15.758038    4604 logs.go:282] 0 containers: []
	W1213 09:08:15.758038    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:15.758038    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:15.758038    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:15.820934    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:15.820934    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:15.851382    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:15.851382    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:15.931108    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:15.919902   24760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:15.921621   24760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:15.922751   24760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:15.924650   24760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:15.925746   24760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:15.931108    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:15.931108    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:15.972073    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:15.972073    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:18.529296    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:18.551856    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:18.582603    4604 logs.go:282] 0 containers: []
	W1213 09:08:18.582603    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:18.586131    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:18.615914    4604 logs.go:282] 0 containers: []
	W1213 09:08:18.615914    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:18.619071    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:18.647226    4604 logs.go:282] 0 containers: []
	W1213 09:08:18.647314    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:18.650885    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:18.677834    4604 logs.go:282] 0 containers: []
	W1213 09:08:18.677834    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:18.681465    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:18.710780    4604 logs.go:282] 0 containers: []
	W1213 09:08:18.710819    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:18.715047    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:18.742085    4604 logs.go:282] 0 containers: []
	W1213 09:08:18.742085    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:18.746505    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:18.773319    4604 logs.go:282] 0 containers: []
	W1213 09:08:18.773319    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:18.773319    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:18.773374    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:18.837290    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:18.837290    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:18.866989    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:18.866989    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:18.948930    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:18.936159   24911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:18.939732   24911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:18.940602   24911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:18.942440   24911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:18.944294   24911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:18.948930    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:18.948930    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:18.991657    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:18.991657    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:21.549759    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:21.572464    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:21.600790    4604 logs.go:282] 0 containers: []
	W1213 09:08:21.600818    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:21.604078    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:21.633799    4604 logs.go:282] 0 containers: []
	W1213 09:08:21.633799    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:21.637744    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:21.665485    4604 logs.go:282] 0 containers: []
	W1213 09:08:21.665485    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:21.669376    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:21.699844    4604 logs.go:282] 0 containers: []
	W1213 09:08:21.699844    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:21.706394    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:21.735819    4604 logs.go:282] 0 containers: []
	W1213 09:08:21.735819    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:21.738827    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:21.766879    4604 logs.go:282] 0 containers: []
	W1213 09:08:21.766879    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:21.770728    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:21.798832    4604 logs.go:282] 0 containers: []
	W1213 09:08:21.798867    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:21.798867    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:21.798867    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:21.863860    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:21.863860    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:21.896284    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:21.896284    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:21.976382    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:21.965807   25066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:21.966601   25066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:21.969521   25066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:21.971003   25066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:21.972104   25066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:21.976382    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:21.976382    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:22.019285    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:22.019285    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:24.577418    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:24.603278    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:24.639919    4604 logs.go:282] 0 containers: []
	W1213 09:08:24.639919    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:24.643610    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:24.669667    4604 logs.go:282] 0 containers: []
	W1213 09:08:24.669690    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:24.672641    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:24.702942    4604 logs.go:282] 0 containers: []
	W1213 09:08:24.702995    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:24.706810    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:24.734192    4604 logs.go:282] 0 containers: []
	W1213 09:08:24.734192    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:24.737895    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:24.769567    4604 logs.go:282] 0 containers: []
	W1213 09:08:24.769597    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:24.773373    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:24.803190    4604 logs.go:282] 0 containers: []
	W1213 09:08:24.803190    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:24.807117    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:24.838064    4604 logs.go:282] 0 containers: []
	W1213 09:08:24.838064    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:24.838064    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:24.838138    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:24.901072    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:24.901072    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:24.931306    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:24.931306    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:25.017636    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:25.007253   25216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:25.008264   25216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:25.009244   25216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:25.011513   25216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:25.013011   25216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:25.017636    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:25.017636    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:25.060810    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:25.060810    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:27.623166    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:27.647045    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:27.677340    4604 logs.go:282] 0 containers: []
	W1213 09:08:27.677340    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:27.680821    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:27.708576    4604 logs.go:282] 0 containers: []
	W1213 09:08:27.708576    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:27.712514    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:27.743161    4604 logs.go:282] 0 containers: []
	W1213 09:08:27.743161    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:27.746176    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:27.775854    4604 logs.go:282] 0 containers: []
	W1213 09:08:27.775854    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:27.779689    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:27.808373    4604 logs.go:282] 0 containers: []
	W1213 09:08:27.808373    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:27.814962    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:27.841903    4604 logs.go:282] 0 containers: []
	W1213 09:08:27.841903    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:27.847177    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:27.876941    4604 logs.go:282] 0 containers: []
	W1213 09:08:27.876941    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:27.876941    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:27.876941    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:27.937569    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:27.937569    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:27.967918    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:27.967918    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:28.051195    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:28.041767   25367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:28.043106   25367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:28.044618   25367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:28.045585   25367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:28.046883   25367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:28.051195    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:28.051195    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:28.091557    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:28.091557    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:30.648207    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:30.671041    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:30.701387    4604 logs.go:282] 0 containers: []
	W1213 09:08:30.701387    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:30.705353    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:30.736395    4604 logs.go:282] 0 containers: []
	W1213 09:08:30.736395    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:30.740850    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:30.768626    4604 logs.go:282] 0 containers: []
	W1213 09:08:30.768704    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:30.772180    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:30.799431    4604 logs.go:282] 0 containers: []
	W1213 09:08:30.799504    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:30.803459    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:30.831305    4604 logs.go:282] 0 containers: []
	W1213 09:08:30.831305    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:30.835828    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:30.864498    4604 logs.go:282] 0 containers: []
	W1213 09:08:30.864498    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:30.868346    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:30.895559    4604 logs.go:282] 0 containers: []
	W1213 09:08:30.895559    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:30.895559    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:30.895559    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:30.960230    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:30.960230    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:30.989103    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:30.989103    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:31.064421    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:31.054673   25520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:31.055288   25520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:31.057455   25520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:31.058494   25520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:31.059785   25520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:31.064516    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:31.064547    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:31.104938    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:31.104938    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:33.662266    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:33.687669    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:33.719674    4604 logs.go:282] 0 containers: []
	W1213 09:08:33.719674    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:33.723494    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:33.753735    4604 logs.go:282] 0 containers: []
	W1213 09:08:33.753735    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:33.757660    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:33.785391    4604 logs.go:282] 0 containers: []
	W1213 09:08:33.785391    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:33.789471    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:33.817747    4604 logs.go:282] 0 containers: []
	W1213 09:08:33.817747    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:33.821119    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:33.849606    4604 logs.go:282] 0 containers: []
	W1213 09:08:33.849635    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:33.852624    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:33.883011    4604 logs.go:282] 0 containers: []
	W1213 09:08:33.883011    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:33.886617    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:33.914695    4604 logs.go:282] 0 containers: []
	W1213 09:08:33.914695    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:33.914695    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:33.914695    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:33.977929    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:33.977929    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:34.008197    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:34.008197    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:34.087742    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:34.077994   25669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:34.079234   25669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:34.080710   25669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:34.081989   25669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:34.083395   25669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:34.087742    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:34.087742    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:34.130894    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:34.130894    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:36.687878    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:36.710647    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:36.741923    4604 logs.go:282] 0 containers: []
	W1213 09:08:36.741956    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:36.745908    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:36.773011    4604 logs.go:282] 0 containers: []
	W1213 09:08:36.773011    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:36.777059    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:36.806949    4604 logs.go:282] 0 containers: []
	W1213 09:08:36.806949    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:36.811294    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:36.839274    4604 logs.go:282] 0 containers: []
	W1213 09:08:36.839274    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:36.843833    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:36.871615    4604 logs.go:282] 0 containers: []
	W1213 09:08:36.871615    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:36.875410    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:36.904496    4604 logs.go:282] 0 containers: []
	W1213 09:08:36.904496    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:36.908270    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:36.937747    4604 logs.go:282] 0 containers: []
	W1213 09:08:36.937747    4604 logs.go:284] No container was found matching "kindnet"
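	Note: the pgrep call and the seven docker ps probes above make up one iteration of the control-plane wait loop: pgrep -xnf reports the newest process whose full command line matches the pattern, and each docker ps -a filter looks for a container named k8s_<component>. "0 containers" for every component means the control plane never came up. The same sweep, condensed (component names copied from the filters above):

	    # One pass over the expected control-plane containers; empty IDs match
	    # the '0 containers: []' lines in this log.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	        printf '%s: %s\n' "$c" "$(docker ps -a --filter "name=k8s_$c" --format '{{.ID}}')"
	    done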
	I1213 09:08:36.937747    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:36.937747    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:37.017981    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:37.005392   25810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:37.010112   25810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:37.011449   25810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:37.012674   25810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:37.013720   25810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:37.017981    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:37.018025    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:37.058111    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:37.058111    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:37.112070    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:37.112070    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:37.178407    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:37.178407    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
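	Note, for reference, the flags in the dmesg gathering step above: -P disables the pager, -H forces human-readable output, -L=never turns color off, and --level restricts output to the listed priorities; tail -n 400 caps it the same way as the journalctl -n 400 calls. Runnable as-is on the node:

	    # Same kernel-log filter minikube runs above, kept to the last 400 lines.
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400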
	I1213 09:08:39.714817    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:39.735622    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:39.767408    4604 logs.go:282] 0 containers: []
	W1213 09:08:39.767408    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:39.771362    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:39.800883    4604 logs.go:282] 0 containers: []
	W1213 09:08:39.800883    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:39.805233    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:39.833400    4604 logs.go:282] 0 containers: []
	W1213 09:08:39.833400    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:39.837009    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:39.864328    4604 logs.go:282] 0 containers: []
	W1213 09:08:39.864373    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:39.868165    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:39.895992    4604 logs.go:282] 0 containers: []
	W1213 09:08:39.895992    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:39.899539    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:39.926222    4604 logs.go:282] 0 containers: []
	W1213 09:08:39.926294    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:39.929312    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:39.957665    4604 logs.go:282] 0 containers: []
	W1213 09:08:39.957738    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:39.957738    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:39.957738    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:39.986966    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:39.986966    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:40.066305    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:40.055341   25967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:40.056045   25967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:40.058442   25967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:40.059663   25967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:40.060820   25967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:40.066357    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:40.066357    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:40.109785    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:40.109785    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:40.157108    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:40.157134    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:42.726706    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:42.752650    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:42.783377    4604 logs.go:282] 0 containers: []
	W1213 09:08:42.783401    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:42.786899    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:42.817139    4604 logs.go:282] 0 containers: []
	W1213 09:08:42.817212    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:42.820862    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:42.847197    4604 logs.go:282] 0 containers: []
	W1213 09:08:42.847268    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:42.850420    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:42.880094    4604 logs.go:282] 0 containers: []
	W1213 09:08:42.880094    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:42.884146    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:42.913168    4604 logs.go:282] 0 containers: []
	W1213 09:08:42.913168    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:42.916601    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:42.945059    4604 logs.go:282] 0 containers: []
	W1213 09:08:42.945059    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:42.950263    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:42.978582    4604 logs.go:282] 0 containers: []
	W1213 09:08:42.978603    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:42.978603    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:42.978603    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:43.041879    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:43.041879    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:43.072317    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:43.072317    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:43.165917    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:43.155759   26118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:43.156841   26118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:43.158782   26118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:43.160038   26118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:43.160953   26118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:43.165917    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:43.165917    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:43.207209    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:43.207209    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:45.761070    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:45.783759    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:45.815346    4604 logs.go:282] 0 containers: []
	W1213 09:08:45.815346    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:45.819219    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:45.846414    4604 logs.go:282] 0 containers: []
	W1213 09:08:45.846414    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:45.849850    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:45.881303    4604 logs.go:282] 0 containers: []
	W1213 09:08:45.881303    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:45.885203    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:45.911758    4604 logs.go:282] 0 containers: []
	W1213 09:08:45.911758    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:45.915687    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:45.946589    4604 logs.go:282] 0 containers: []
	W1213 09:08:45.946589    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:45.950051    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:45.976088    4604 logs.go:282] 0 containers: []
	W1213 09:08:45.976088    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:45.979669    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:46.011063    4604 logs.go:282] 0 containers: []
	W1213 09:08:46.011155    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:46.011155    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:46.011155    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:46.074019    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:46.075019    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:46.106619    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:46.106619    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:46.188897    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:46.178478   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:46.179482   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:46.180684   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:46.181950   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:46.183541   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:46.188897    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:46.188897    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:46.229995    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:46.229995    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:48.789468    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:48.811354    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:48.842470    4604 logs.go:282] 0 containers: []
	W1213 09:08:48.842470    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:48.848670    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:48.876329    4604 logs.go:282] 0 containers: []
	W1213 09:08:48.876329    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:48.879989    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:48.908565    4604 logs.go:282] 0 containers: []
	W1213 09:08:48.908565    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:48.912255    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:48.948072    4604 logs.go:282] 0 containers: []
	W1213 09:08:48.948072    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:48.951857    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:48.980030    4604 logs.go:282] 0 containers: []
	W1213 09:08:48.980030    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:48.983447    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:49.016239    4604 logs.go:282] 0 containers: []
	W1213 09:08:49.016239    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:49.022258    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:49.049950    4604 logs.go:282] 0 containers: []
	W1213 09:08:49.049950    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:49.049950    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:49.049950    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:49.094252    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:49.094252    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:49.146427    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:49.146952    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:49.205850    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:49.205850    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:49.235850    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:49.235850    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:49.315580    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:49.305530   26435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:49.308706   26435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:49.309996   26435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:49.311283   26435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:49.312405   26435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:51.820920    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:51.843200    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:51.874270    4604 logs.go:282] 0 containers: []
	W1213 09:08:51.874322    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:51.877687    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:51.905886    4604 logs.go:282] 0 containers: []
	W1213 09:08:51.905886    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:51.910483    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:51.937921    4604 logs.go:282] 0 containers: []
	W1213 09:08:51.938207    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:51.942126    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:51.970152    4604 logs.go:282] 0 containers: []
	W1213 09:08:51.970152    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:51.973777    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:52.005341    4604 logs.go:282] 0 containers: []
	W1213 09:08:52.005341    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:52.011533    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:52.042004    4604 logs.go:282] 0 containers: []
	W1213 09:08:52.042004    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:52.045665    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:52.073964    4604 logs.go:282] 0 containers: []
	W1213 09:08:52.073964    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:52.073964    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:52.073964    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:52.136324    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:52.137327    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:52.167493    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:52.167493    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:52.247700    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:52.239213   26566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:52.240590   26566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:52.241695   26566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:52.242537   26566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:52.243658   26566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:52.247700    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:52.247700    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:52.289002    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:52.289002    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:54.844809    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:54.866930    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:54.898229    4604 logs.go:282] 0 containers: []
	W1213 09:08:54.898229    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:54.902031    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:54.932712    4604 logs.go:282] 0 containers: []
	W1213 09:08:54.932712    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:54.936121    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:54.963632    4604 logs.go:282] 0 containers: []
	W1213 09:08:54.963632    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:54.967503    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:54.993576    4604 logs.go:282] 0 containers: []
	W1213 09:08:54.993576    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:54.997842    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:55.025663    4604 logs.go:282] 0 containers: []
	W1213 09:08:55.025663    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:55.029428    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:55.057141    4604 logs.go:282] 0 containers: []
	W1213 09:08:55.057141    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:55.061017    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:55.089820    4604 logs.go:282] 0 containers: []
	W1213 09:08:55.089820    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:55.089820    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:55.089820    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:55.153977    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:55.154001    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:55.215966    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:55.215966    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:55.244751    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:55.244751    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:55.322925    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:55.313352   26733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:55.314042   26733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:55.317002   26733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:55.318221   26733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:55.318785   26733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:55.322925    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:55.322925    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:57.870018    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:57.892445    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:57.923189    4604 logs.go:282] 0 containers: []
	W1213 09:08:57.923189    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:57.926680    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:57.956979    4604 logs.go:282] 0 containers: []
	W1213 09:08:57.956979    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:57.960468    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:57.989714    4604 logs.go:282] 0 containers: []
	W1213 09:08:57.989714    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:57.994672    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:58.021349    4604 logs.go:282] 0 containers: []
	W1213 09:08:58.021349    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:58.024912    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:58.053594    4604 logs.go:282] 0 containers: []
	W1213 09:08:58.053594    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:58.057186    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:58.086247    4604 logs.go:282] 0 containers: []
	W1213 09:08:58.086247    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:58.089444    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:58.117375    4604 logs.go:282] 0 containers: []
	W1213 09:08:58.117375    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:58.117375    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:58.117375    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:58.159414    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:58.159414    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:58.213441    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:58.213441    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:58.275646    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:58.275646    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:58.307733    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:58.307733    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:58.393941    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:58.383096   26883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:58.384651   26883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:58.385333   26883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:58.388769   26883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:58.389485   26883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:00.900693    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:00.925586    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:00.954130    4604 logs.go:282] 0 containers: []
	W1213 09:09:00.954130    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:00.957383    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:00.984796    4604 logs.go:282] 0 containers: []
	W1213 09:09:00.984826    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:00.988339    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:01.013943    4604 logs.go:282] 0 containers: []
	W1213 09:09:01.013943    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:01.017466    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:01.045614    4604 logs.go:282] 0 containers: []
	W1213 09:09:01.045614    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:01.049219    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:01.077719    4604 logs.go:282] 0 containers: []
	W1213 09:09:01.077719    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:01.083105    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:01.114373    4604 logs.go:282] 0 containers: []
	W1213 09:09:01.114373    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:01.118034    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:01.145171    4604 logs.go:282] 0 containers: []
	W1213 09:09:01.145171    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:01.145171    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:01.145171    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:01.227391    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:01.216889   27008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:01.217760   27008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:01.220024   27008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:01.220903   27008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:01.223053   27008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:01.227391    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:01.227391    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:01.266324    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:01.266324    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:01.318698    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:01.318698    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:01.379640    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:01.379640    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:03.917253    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:03.941711    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:03.969911    4604 logs.go:282] 0 containers: []
	W1213 09:09:03.969911    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:03.973403    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:04.002458    4604 logs.go:282] 0 containers: []
	W1213 09:09:04.002458    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:04.006090    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:04.034145    4604 logs.go:282] 0 containers: []
	W1213 09:09:04.034145    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:04.037736    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:04.063991    4604 logs.go:282] 0 containers: []
	W1213 09:09:04.063991    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:04.066963    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:04.096807    4604 logs.go:282] 0 containers: []
	W1213 09:09:04.096807    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:04.100249    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:04.128437    4604 logs.go:282] 0 containers: []
	W1213 09:09:04.128437    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:04.132074    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:04.160225    4604 logs.go:282] 0 containers: []
	W1213 09:09:04.160225    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:04.160225    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:04.160225    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:04.222581    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:04.222581    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:04.251920    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:04.251920    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:04.333622    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:04.320010   27162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:04.321197   27162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:04.326586   27162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:04.327493   27162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:04.329574   27162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:04.333622    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:04.333622    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:04.373214    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:04.373214    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:06.935527    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:06.958474    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:06.990564    4604 logs.go:282] 0 containers: []
	W1213 09:09:06.990564    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:06.994406    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:07.025506    4604 logs.go:282] 0 containers: []
	W1213 09:09:07.025506    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:07.029905    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:07.060066    4604 logs.go:282] 0 containers: []
	W1213 09:09:07.060066    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:07.063610    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:07.091922    4604 logs.go:282] 0 containers: []
	W1213 09:09:07.092007    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:07.095595    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:07.124460    4604 logs.go:282] 0 containers: []
	W1213 09:09:07.124496    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:07.128147    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:07.157131    4604 logs.go:282] 0 containers: []
	W1213 09:09:07.157131    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:07.160743    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:07.191500    4604 logs.go:282] 0 containers: []
	W1213 09:09:07.191500    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:07.191500    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:07.191500    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:07.242194    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:07.242273    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:07.302067    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:07.302067    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:07.333088    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:07.333088    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:07.415000    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:07.401947   27328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:07.407518   27328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:07.408692   27328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:07.409598   27328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:07.411816   27328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:09:07.401947   27328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:07.407518   27328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:07.408692   27328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:07.409598   27328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:07.411816   27328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:09:07.415000    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:07.415000    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:09.963522    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:09.986505    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:10.023010    4604 logs.go:282] 0 containers: []
	W1213 09:09:10.023010    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:10.026202    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:10.057866    4604 logs.go:282] 0 containers: []
	W1213 09:09:10.057945    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:10.061802    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:10.089523    4604 logs.go:282] 0 containers: []
	W1213 09:09:10.089523    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:10.092989    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:10.124941    4604 logs.go:282] 0 containers: []
	W1213 09:09:10.124941    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:10.128882    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:10.157336    4604 logs.go:282] 0 containers: []
	W1213 09:09:10.157336    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:10.160838    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:10.186957    4604 logs.go:282] 0 containers: []
	W1213 09:09:10.186957    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:10.190881    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:10.219557    4604 logs.go:282] 0 containers: []
	W1213 09:09:10.219557    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:10.219557    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:10.219557    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:10.298159    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:10.289746   27456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:10.290828   27456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:10.291834   27456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:10.292960   27456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:10.294167   27456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:09:10.289746   27456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:10.290828   27456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:10.291834   27456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:10.292960   27456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:10.294167   27456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:09:10.298159    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:10.298159    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:10.338779    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:10.338779    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:10.385337    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:10.385337    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:10.445911    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:10.445911    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:12.983669    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:13.005971    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:13.038383    4604 logs.go:282] 0 containers: []
	W1213 09:09:13.038383    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:13.041755    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:13.071860    4604 logs.go:282] 0 containers: []
	W1213 09:09:13.071860    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:13.075101    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:13.104117    4604 logs.go:282] 0 containers: []
	W1213 09:09:13.104198    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:13.107582    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:13.137511    4604 logs.go:282] 0 containers: []
	W1213 09:09:13.137511    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:13.142951    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:13.170239    4604 logs.go:282] 0 containers: []
	W1213 09:09:13.170239    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:13.174246    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:13.204251    4604 logs.go:282] 0 containers: []
	W1213 09:09:13.204251    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:13.207747    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:13.235835    4604 logs.go:282] 0 containers: []
	W1213 09:09:13.235835    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:13.235835    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:13.235835    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:13.299873    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:13.300878    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:13.331103    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:13.331103    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:13.409680    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:13.398624   27610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:13.400704   27610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:13.401513   27610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:13.404845   27610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:13.405788   27610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:09:13.398624   27610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:13.400704   27610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:13.401513   27610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:13.404845   27610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:13.405788   27610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:09:13.409714    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:13.409714    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:13.454882    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:13.454882    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:16.009703    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:16.033721    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:16.065430    4604 logs.go:282] 0 containers: []
	W1213 09:09:16.065430    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:16.069567    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:16.096385    4604 logs.go:282] 0 containers: []
	W1213 09:09:16.096459    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:16.099989    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:16.127782    4604 logs.go:282] 0 containers: []
	W1213 09:09:16.127782    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:16.130994    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:16.161401    4604 logs.go:282] 0 containers: []
	W1213 09:09:16.161401    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:16.165139    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:16.193589    4604 logs.go:282] 0 containers: []
	W1213 09:09:16.193589    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:16.197319    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:16.226572    4604 logs.go:282] 0 containers: []
	W1213 09:09:16.226607    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:16.230538    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:16.257820    4604 logs.go:282] 0 containers: []
	W1213 09:09:16.257820    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:16.257820    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:16.257820    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:16.308467    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:16.308467    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:16.371370    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:16.371370    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:16.400835    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:16.400835    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:16.485671    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:16.475989   27776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:16.477022   27776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:16.477650   27776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:16.480077   27776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:16.481064   27776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:09:16.475989   27776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:16.477022   27776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:16.477650   27776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:16.480077   27776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:16.481064   27776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:09:16.485701    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:16.485701    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:19.036505    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:19.061114    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:19.095852    4604 logs.go:282] 0 containers: []
	W1213 09:09:19.095852    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:19.099353    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:19.131781    4604 logs.go:282] 0 containers: []
	W1213 09:09:19.131781    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:19.134812    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:19.165823    4604 logs.go:282] 0 containers: []
	W1213 09:09:19.165823    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:19.169019    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:19.198392    4604 logs.go:282] 0 containers: []
	W1213 09:09:19.198392    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:19.203290    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:19.233051    4604 logs.go:282] 0 containers: []
	W1213 09:09:19.233051    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:19.237259    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:19.263869    4604 logs.go:282] 0 containers: []
	W1213 09:09:19.263869    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:19.268019    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:19.296220    4604 logs.go:282] 0 containers: []
	W1213 09:09:19.296220    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:19.296220    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:19.296220    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:19.359981    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:19.359981    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:19.391692    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:19.391692    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:19.476176    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:19.465489   27912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:19.466623   27912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:19.468158   27912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:19.469971   27912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:19.470922   27912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:09:19.465489   27912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:19.466623   27912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:19.468158   27912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:19.469971   27912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:19.470922   27912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:09:19.476176    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:19.476176    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:19.518567    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:19.518567    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:22.072334    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:22.095545    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:22.126659    4604 logs.go:282] 0 containers: []
	W1213 09:09:22.126690    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:22.130501    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:22.160329    4604 logs.go:282] 0 containers: []
	W1213 09:09:22.160363    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:22.164108    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:22.193702    4604 logs.go:282] 0 containers: []
	W1213 09:09:22.193732    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:22.196904    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:22.225415    4604 logs.go:282] 0 containers: []
	W1213 09:09:22.225415    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:22.228719    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:22.258896    4604 logs.go:282] 0 containers: []
	W1213 09:09:22.258896    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:22.262806    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:22.289609    4604 logs.go:282] 0 containers: []
	W1213 09:09:22.289609    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:22.293253    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:22.323681    4604 logs.go:282] 0 containers: []
	W1213 09:09:22.323681    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:22.323681    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:22.323681    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:22.386923    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:22.386923    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:22.416353    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:22.416353    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:22.498735    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:22.491314   28058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:22.492386   28058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:22.493575   28058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:22.494571   28058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:22.495595   28058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:09:22.491314   28058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:22.492386   28058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:22.493575   28058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:22.494571   28058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:22.495595   28058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:09:22.498735    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:22.498735    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:22.550754    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:22.550754    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:25.111955    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:25.134114    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:25.160988    4604 logs.go:282] 0 containers: []
	W1213 09:09:25.160988    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:25.164339    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:25.195249    4604 logs.go:282] 0 containers: []
	W1213 09:09:25.195249    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:25.198638    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:25.225490    4604 logs.go:282] 0 containers: []
	W1213 09:09:25.225490    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:25.231098    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:25.257691    4604 logs.go:282] 0 containers: []
	W1213 09:09:25.257691    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:25.261515    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:25.287683    4604 logs.go:282] 0 containers: []
	W1213 09:09:25.287683    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:25.293213    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:25.319319    4604 logs.go:282] 0 containers: []
	W1213 09:09:25.319319    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:25.322958    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:25.354108    4604 logs.go:282] 0 containers: []
	W1213 09:09:25.354108    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:25.354198    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:25.354198    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:25.397011    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:25.397011    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:25.455292    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:25.455292    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:25.517423    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:25.517423    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:25.546322    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:25.546322    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:25.627826    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:25.618808   28240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:25.619718   28240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:25.621380   28240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:25.623002   28240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:25.624181   28240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:09:25.618808   28240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:25.619718   28240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:25.621380   28240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:25.623002   28240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:25.624181   28240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:09:28.133991    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:28.156525    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:28.184733    4604 logs.go:282] 0 containers: []
	W1213 09:09:28.184733    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:28.188704    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:28.216710    4604 logs.go:282] 0 containers: []
	W1213 09:09:28.216710    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:28.220744    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:28.249082    4604 logs.go:282] 0 containers: []
	W1213 09:09:28.249082    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:28.252646    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:28.284289    4604 logs.go:282] 0 containers: []
	W1213 09:09:28.284289    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:28.288332    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:28.314796    4604 logs.go:282] 0 containers: []
	W1213 09:09:28.314796    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:28.321406    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:28.350295    4604 logs.go:282] 0 containers: []
	W1213 09:09:28.350295    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:28.353850    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:28.382048    4604 logs.go:282] 0 containers: []
	W1213 09:09:28.382048    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:28.382048    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:28.382048    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:28.444457    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:28.444457    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:28.475310    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:28.475337    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:28.562628    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:28.551828   28371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:28.553431   28371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:28.555792   28371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:28.558400   28371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:28.559403   28371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:09:28.551828   28371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:28.553431   28371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:28.555792   28371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:28.558400   28371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:28.559403   28371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:09:28.562628    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:28.562628    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:28.605307    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:28.605307    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:31.165266    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:31.186966    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:31.222005    4604 logs.go:282] 0 containers: []
	W1213 09:09:31.222066    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:31.225186    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:31.256308    4604 logs.go:282] 0 containers: []
	W1213 09:09:31.256308    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:31.260088    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:31.287293    4604 logs.go:282] 0 containers: []
	W1213 09:09:31.287293    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:31.290982    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:31.319241    4604 logs.go:282] 0 containers: []
	W1213 09:09:31.319241    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:31.322581    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:31.350058    4604 logs.go:282] 0 containers: []
	W1213 09:09:31.350128    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:31.353584    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:31.380173    4604 logs.go:282] 0 containers: []
	W1213 09:09:31.380212    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:31.384070    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:31.411239    4604 logs.go:282] 0 containers: []
	W1213 09:09:31.411239    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:31.411239    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:31.411239    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:31.477283    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:31.477283    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:31.507500    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:31.508020    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:31.597314    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:31.584543   28527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:31.585344   28527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:31.588383   28527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:31.589783   28527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:31.590653   28527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:09:31.584543   28527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:31.585344   28527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:31.588383   28527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:31.589783   28527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:31.590653   28527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:09:31.597314    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:31.597314    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:31.635938    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:31.635938    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:34.189996    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:34.212398    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:34.238809    4604 logs.go:282] 0 containers: []
	W1213 09:09:34.238809    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:34.242256    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:34.270112    4604 logs.go:282] 0 containers: []
	W1213 09:09:34.270112    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:34.273875    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:34.303456    4604 logs.go:282] 0 containers: []
	W1213 09:09:34.303456    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:34.307522    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:34.338016    4604 logs.go:282] 0 containers: []
	W1213 09:09:34.338016    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:34.341872    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:34.368952    4604 logs.go:282] 0 containers: []
	W1213 09:09:34.368952    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:34.374198    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:34.405261    4604 logs.go:282] 0 containers: []
	W1213 09:09:34.405261    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:34.408381    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:34.435072    4604 logs.go:282] 0 containers: []
	W1213 09:09:34.435072    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:34.435072    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:34.435072    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:34.515381    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:34.502247   28663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:34.503068   28663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:34.508040   28663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:34.508918   28663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:34.510099   28663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
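The five "connection refused" lines above mean kubectl itself ran fine; nothing is accepting connections on the apiserver port (8441) inside the node. A minimal manual spot check along the same lines (illustrative only; <profile> stands for the minikube profile under test, which this excerpt does not name):

	# hypothetical spot check: is anything listening on the apiserver port?
	minikube ssh -p <profile> -- "sudo ss -tlnp | grep :8441 || echo 'nothing listening on 8441'"

This is consistent with the container probes in each cycle: no k8s_kube-apiserver container exists, so the port never comes up.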
	I1213 09:09:34.515381    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:34.515381    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
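Passing -u twice makes journalctl merge entries from the docker and cri-docker units into one stream, and -n 400 caps it at the last 400 lines. The same pattern works for ad-hoc inspection, e.g. with precise timestamps and no pager:

	sudo journalctl -u docker -u cri-docker -n 400 --no-pager -o short-precise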
	I1213 09:09:34.573241    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:34.573241    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:34.623650    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:34.624178    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:34.682935    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:34.682935    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
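Each retry cycle above runs the same seven name-filtered docker ps probes before re-gathering logs. A condensed bash sketch of that probe sequence (the k8s_ name prefix and the component list are taken directly from the commands in this log):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
	  [ -z "$ids" ] && echo "No container was found matching \"${c}\""
	done

With the apiserver down, every probe returns an empty list, which is why each cycle ends in the same describe-nodes failure.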
	I1213 09:09:37.219569    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:37.242545    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:37.272082    4604 logs.go:282] 0 containers: []
	W1213 09:09:37.272082    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:37.275835    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:37.304181    4604 logs.go:282] 0 containers: []
	W1213 09:09:37.304181    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:37.307884    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:37.335943    4604 logs.go:282] 0 containers: []
	W1213 09:09:37.335943    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:37.339864    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:37.377566    4604 logs.go:282] 0 containers: []
	W1213 09:09:37.377566    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:37.382018    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:37.412404    4604 logs.go:282] 0 containers: []
	W1213 09:09:37.412404    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:37.416038    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:37.442722    4604 logs.go:282] 0 containers: []
	W1213 09:09:37.442722    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:37.446771    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:37.474398    4604 logs.go:282] 0 containers: []
	W1213 09:09:37.474398    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:37.474398    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:37.474398    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:37.577898    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:37.567137   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:37.567518   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:37.570136   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:37.571337   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:37.572686   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:37.577898    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:37.577898    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:37.620560    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:37.620560    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:37.669632    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:37.669632    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:37.734142    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:37.734142    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
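The dmesg invocation is tuned for capture rather than interactive use: -H gives human-readable timestamps, -P disables the pager, -L=never suppresses color codes, and --level restricts output to warning severity and worse. Spelled with long options (assuming util-linux dmesg, where these short flags come from):

	sudo dmesg --human --nopager --color=never --level warn,err,crit,alert,emerg | tail -n 400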
	I1213 09:09:40.271884    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:40.294824    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:40.321888    4604 logs.go:282] 0 containers: []
	W1213 09:09:40.321888    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:40.325505    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:40.353723    4604 logs.go:282] 0 containers: []
	W1213 09:09:40.353808    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:40.357193    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:40.386522    4604 logs.go:282] 0 containers: []
	W1213 09:09:40.386522    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:40.391186    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:40.418547    4604 logs.go:282] 0 containers: []
	W1213 09:09:40.418547    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:40.425278    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:40.455783    4604 logs.go:282] 0 containers: []
	W1213 09:09:40.455783    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:40.459890    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:40.489966    4604 logs.go:282] 0 containers: []
	W1213 09:09:40.489966    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:40.493703    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:40.538181    4604 logs.go:282] 0 containers: []
	W1213 09:09:40.538181    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:40.538253    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:40.538253    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:40.601826    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:40.601826    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:40.631898    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:40.631898    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:40.713071    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:40.701224   28980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:40.701842   28980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:40.706275   28980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:40.707428   28980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:40.708512   28980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:40.713071    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:40.713071    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:40.755270    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:40.755270    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
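The container-status command builds in a fallback: it prefers crictl when installed, and if `which` finds nothing, the literal crictl invocation fails and the || hands control to the Docker CLI. Annotated:

	# prefer crictl when installed; when `which` finds nothing, the literal
	# "crictl" fails and the || falls through to the docker CLI
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a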
	I1213 09:09:43.309018    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:43.331107    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:43.365765    4604 logs.go:282] 0 containers: []
	W1213 09:09:43.365765    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:43.369683    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:43.396582    4604 logs.go:282] 0 containers: []
	W1213 09:09:43.396582    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:43.400512    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:43.429185    4604 logs.go:282] 0 containers: []
	W1213 09:09:43.429185    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:43.432708    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:43.463128    4604 logs.go:282] 0 containers: []
	W1213 09:09:43.463128    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:43.466133    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:43.496082    4604 logs.go:282] 0 containers: []
	W1213 09:09:43.496082    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:43.500151    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:43.537578    4604 logs.go:282] 0 containers: []
	W1213 09:09:43.537578    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:43.541441    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:43.569477    4604 logs.go:282] 0 containers: []
	W1213 09:09:43.569477    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:43.569477    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:43.569521    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:43.620575    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:43.620575    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:43.681515    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:43.681515    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:43.710447    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:43.710447    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:43.793119    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:43.783406   29143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:43.784625   29143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:43.785703   29143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:43.786648   29143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:43.787996   29143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:43.793119    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:43.793119    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:46.339779    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:46.362296    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:46.391878    4604 logs.go:282] 0 containers: []
	W1213 09:09:46.391878    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:46.395830    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:46.424203    4604 logs.go:282] 0 containers: []
	W1213 09:09:46.424203    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:46.427838    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:46.456024    4604 logs.go:282] 0 containers: []
	W1213 09:09:46.456024    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:46.460057    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:46.488187    4604 logs.go:282] 0 containers: []
	W1213 09:09:46.488187    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:46.493831    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:46.533872    4604 logs.go:282] 0 containers: []
	W1213 09:09:46.533872    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:46.540390    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:46.568011    4604 logs.go:282] 0 containers: []
	W1213 09:09:46.568011    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:46.571702    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:46.602586    4604 logs.go:282] 0 containers: []
	W1213 09:09:46.602653    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:46.602653    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:46.602653    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:46.662280    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:46.662280    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:46.693557    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:46.693557    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:46.782210    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:46.770755   29279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:46.771672   29279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:46.774093   29279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:46.774970   29279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:46.777140   29279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:46.782210    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:46.782210    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:46.823701    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:46.823701    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:49.384298    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:49.407707    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:49.438420    4604 logs.go:282] 0 containers: []
	W1213 09:09:49.438420    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:49.442231    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:49.470770    4604 logs.go:282] 0 containers: []
	W1213 09:09:49.470770    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:49.473919    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:49.504515    4604 logs.go:282] 0 containers: []
	W1213 09:09:49.504546    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:49.508487    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:49.547082    4604 logs.go:282] 0 containers: []
	W1213 09:09:49.547082    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:49.551548    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:49.578796    4604 logs.go:282] 0 containers: []
	W1213 09:09:49.578796    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:49.582281    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:49.608530    4604 logs.go:282] 0 containers: []
	W1213 09:09:49.608530    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:49.611741    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:49.639231    4604 logs.go:282] 0 containers: []
	W1213 09:09:49.639231    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:49.639231    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:49.639231    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:49.689389    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:49.689389    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:49.753229    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:49.753229    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:49.783294    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:49.783294    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:49.864270    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:49.854364   29444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:49.855305   29444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:49.858106   29444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:49.859177   29444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:49.860391   29444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:49.864270    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:49.864270    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:52.412975    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:52.439979    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:52.475193    4604 logs.go:282] 0 containers: []
	W1213 09:09:52.475193    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:52.479114    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:52.510741    4604 logs.go:282] 0 containers: []
	W1213 09:09:52.510741    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:52.514487    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:52.557360    4604 logs.go:282] 0 containers: []
	W1213 09:09:52.557360    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:52.561448    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:52.588077    4604 logs.go:282] 0 containers: []
	W1213 09:09:52.588077    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:52.591539    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:52.621182    4604 logs.go:282] 0 containers: []
	W1213 09:09:52.621182    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:52.624734    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:52.650838    4604 logs.go:282] 0 containers: []
	W1213 09:09:52.650838    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:52.655565    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:52.686451    4604 logs.go:282] 0 containers: []
	W1213 09:09:52.686451    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:52.686451    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:52.686528    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:52.747788    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:52.747788    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:52.781834    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:52.782825    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:52.860287    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:52.851144   29582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:52.852167   29582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:52.853303   29582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:52.854413   29582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:52.855634   29582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:52.860362    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:52.860362    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:52.905051    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:52.905051    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:55.461925    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:55.484035    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:55.517116    4604 logs.go:282] 0 containers: []
	W1213 09:09:55.517116    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:55.522844    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:55.553488    4604 logs.go:282] 0 containers: []
	W1213 09:09:55.553488    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:55.557370    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:55.589995    4604 logs.go:282] 0 containers: []
	W1213 09:09:55.589995    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:55.595259    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:55.622638    4604 logs.go:282] 0 containers: []
	W1213 09:09:55.622707    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:55.626066    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:55.652752    4604 logs.go:282] 0 containers: []
	W1213 09:09:55.652752    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:55.657065    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:55.685386    4604 logs.go:282] 0 containers: []
	W1213 09:09:55.685407    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:55.689428    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:55.717051    4604 logs.go:282] 0 containers: []
	W1213 09:09:55.717051    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:55.717051    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:55.717120    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:55.758337    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:55.758337    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:55.822375    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:55.822375    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:55.885080    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:55.885080    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:55.917741    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:55.917741    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:55.995300    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:55.984347   29760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:55.985357   29760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:55.985896   29760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:55.988781   29760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:55.989544   29760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:58.500574    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:58.521337    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:58.548629    4604 logs.go:282] 0 containers: []
	W1213 09:09:58.548629    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:58.551546    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:58.581100    4604 logs.go:282] 0 containers: []
	W1213 09:09:58.581100    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:58.586220    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:58.613906    4604 logs.go:282] 0 containers: []
	W1213 09:09:58.613906    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:58.617469    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:58.644238    4604 logs.go:282] 0 containers: []
	W1213 09:09:58.644292    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:58.648344    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:58.678031    4604 logs.go:282] 0 containers: []
	W1213 09:09:58.678031    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:58.681474    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:58.707025    4604 logs.go:282] 0 containers: []
	W1213 09:09:58.707025    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:58.710542    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:58.742746    4604 logs.go:282] 0 containers: []
	W1213 09:09:58.742770    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:58.742770    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:58.742770    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:58.805849    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:58.805849    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:58.837389    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:58.837389    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:58.917868    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:58.906816   29896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:58.907692   29896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:58.911842   29896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:58.913304   29896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:58.914304   29896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:58.917899    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:58.917899    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:58.959951    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:58.959951    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:01.514466    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:01.535932    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:01.567037    4604 logs.go:282] 0 containers: []
	W1213 09:10:01.567037    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:01.571145    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:01.595775    4604 logs.go:282] 0 containers: []
	W1213 09:10:01.595775    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:01.599771    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:01.629170    4604 logs.go:282] 0 containers: []
	W1213 09:10:01.629170    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:01.632128    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:01.662382    4604 logs.go:282] 0 containers: []
	W1213 09:10:01.662382    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:01.665517    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:01.693368    4604 logs.go:282] 0 containers: []
	W1213 09:10:01.693368    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:01.696830    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:01.724611    4604 logs.go:282] 0 containers: []
	W1213 09:10:01.724611    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:01.728207    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:01.755432    4604 logs.go:282] 0 containers: []
	W1213 09:10:01.755432    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:01.755432    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:01.755432    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:01.821399    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:01.821399    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:01.852579    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:01.853099    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:01.934160    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:01.923250   30043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:01.924109   30043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:01.926861   30043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:01.928007   30043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:01.929279   30043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:10:01.934160    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:01.934160    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:01.976648    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:01.976648    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:04.534486    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:04.556301    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:04.587516    4604 logs.go:282] 0 containers: []
	W1213 09:10:04.587516    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:04.591921    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:04.621299    4604 logs.go:282] 0 containers: []
	W1213 09:10:04.621371    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:04.625334    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:04.653954    4604 logs.go:282] 0 containers: []
	W1213 09:10:04.653954    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:04.657436    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:04.686845    4604 logs.go:282] 0 containers: []
	W1213 09:10:04.686845    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:04.690201    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:04.718702    4604 logs.go:282] 0 containers: []
	W1213 09:10:04.718702    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:04.722366    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:04.750970    4604 logs.go:282] 0 containers: []
	W1213 09:10:04.750970    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:04.754283    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:04.783682    4604 logs.go:282] 0 containers: []
	W1213 09:10:04.783682    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:04.783682    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:04.783682    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:04.844699    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:04.844699    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:04.875813    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:04.875813    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:04.953200    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:04.941991   30194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:04.942942   30194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:04.946691   30194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:04.947838   30194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:04.948867   30194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:10:04.953200    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:04.953200    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:04.993306    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:04.993306    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
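	[editor's note] Each retry above follows the same shape: for every control-plane component, minikube lists containers (running or exited) whose Docker name carries the k8s_ prefix, and logs a warning when the list comes back empty. A minimal standalone sketch of that name-filter sweep, in Go since that is what logs.go is written in (the helper name and loop here are invented for illustration; this is not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs mirrors the check logged at logs.go:282/284:
    // docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
    	for _, c := range components {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Printf("listing %q failed: %v\n", c, err)
    			continue
    		}
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    		if len(ids) == 0 {
    			fmt.Printf("no container was found matching %q\n", c)
    		}
    	}
    }

	An empty result for every component, as seen above, means the control plane never came up on this node, which is why each cycle falls through to gathering kubelet, dmesg, and Docker logs instead.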
	I1213 09:10:07.543188    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:07.566411    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:07.596022    4604 logs.go:282] 0 containers: []
	W1213 09:10:07.596022    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:07.599737    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:07.627899    4604 logs.go:282] 0 containers: []
	W1213 09:10:07.627899    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:07.631860    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:07.661281    4604 logs.go:282] 0 containers: []
	W1213 09:10:07.661281    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:07.665185    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:07.695914    4604 logs.go:282] 0 containers: []
	W1213 09:10:07.695914    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:07.699555    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:07.732011    4604 logs.go:282] 0 containers: []
	W1213 09:10:07.732058    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:07.736521    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:07.769602    4604 logs.go:282] 0 containers: []
	W1213 09:10:07.769602    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:07.773486    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:07.802107    4604 logs.go:282] 0 containers: []
	W1213 09:10:07.802107    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:07.802107    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:07.802107    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:07.864516    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:07.864516    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:07.896513    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:07.896513    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:07.973085    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:07.961966   30341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:07.962932   30341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:07.964132   30341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:07.966225   30341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:07.967235   30341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:07.961966   30341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:07.962932   30341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:07.964132   30341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:07.966225   30341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:07.967235   30341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:07.973085    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:07.973085    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:08.014869    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:08.014869    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
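	[editor's note] The describe-nodes step keeps failing for a single reason: nothing is listening on localhost:8441, the apiserver port this profile's kubeconfig points at. kubectl's discovery client retries its API-group request, which is why each failure block shows five identical memcache.go connection-refused errors before the final "connection to the server localhost:8441 was refused" message. A quick TCP probe (a sketch, with the port taken from the log) confirms the refusal without involving kubectl at all:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Port 8441 is the apiserver port shown in the kubeconfig errors above.
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		// Matches the log: dial tcp [::1]:8441: connect: connection refused
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on :8441")
    }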
	I1213 09:10:10.570544    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:10.592396    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:10.624974    4604 logs.go:282] 0 containers: []
	W1213 09:10:10.624974    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:10.629502    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:10.657201    4604 logs.go:282] 0 containers: []
	W1213 09:10:10.657201    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:10.660591    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:10.687563    4604 logs.go:282] 0 containers: []
	W1213 09:10:10.687563    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:10.691289    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:10.721420    4604 logs.go:282] 0 containers: []
	W1213 09:10:10.721420    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:10.724919    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:10.752211    4604 logs.go:282] 0 containers: []
	W1213 09:10:10.752211    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:10.755905    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:10.784215    4604 logs.go:282] 0 containers: []
	W1213 09:10:10.784215    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:10.788207    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:10.816951    4604 logs.go:282] 0 containers: []
	W1213 09:10:10.816951    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:10.816951    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:10.816951    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:10.879172    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:10.879172    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:10.908202    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:10.908202    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:10.986325    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:10.976268   30491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:10.977455   30491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:10.978475   30491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:10.979601   30491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:10.980602   30491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:10.976268   30491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:10.977455   30491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:10.978475   30491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:10.979601   30491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:10.980602   30491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:10.986325    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:10.986325    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:11.027515    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:11.027515    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:13.588427    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:13.611368    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:13.644873    4604 logs.go:282] 0 containers: []
	W1213 09:10:13.644873    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:13.648808    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:13.677881    4604 logs.go:282] 0 containers: []
	W1213 09:10:13.677942    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:13.682617    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:13.712870    4604 logs.go:282] 0 containers: []
	W1213 09:10:13.712870    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:13.716696    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:13.744007    4604 logs.go:282] 0 containers: []
	W1213 09:10:13.744007    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:13.748548    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:13.777967    4604 logs.go:282] 0 containers: []
	W1213 09:10:13.778011    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:13.781321    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:13.809271    4604 logs.go:282] 0 containers: []
	W1213 09:10:13.809271    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:13.813285    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:13.840555    4604 logs.go:282] 0 containers: []
	W1213 09:10:13.840555    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:13.840555    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:13.840555    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:13.904251    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:13.904251    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:13.935133    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:13.935133    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:14.016449    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:14.005177   30636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:14.005946   30636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:14.009264   30636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:14.010040   30636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:14.012104   30636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:14.005177   30636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:14.005946   30636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:14.009264   30636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:14.010040   30636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:14.012104   30636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:14.016449    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:14.016449    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:14.057706    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:14.057706    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:16.615756    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:16.638088    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:16.670041    4604 logs.go:282] 0 containers: []
	W1213 09:10:16.670041    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:16.673924    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:16.704163    4604 logs.go:282] 0 containers: []
	W1213 09:10:16.704163    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:16.710097    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:16.740700    4604 logs.go:282] 0 containers: []
	W1213 09:10:16.740700    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:16.744219    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:16.771219    4604 logs.go:282] 0 containers: []
	W1213 09:10:16.771219    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:16.774904    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:16.804658    4604 logs.go:282] 0 containers: []
	W1213 09:10:16.804658    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:16.808110    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:16.837026    4604 logs.go:282] 0 containers: []
	W1213 09:10:16.837026    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:16.840957    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:16.869149    4604 logs.go:282] 0 containers: []
	W1213 09:10:16.869149    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:16.869149    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:16.869149    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:16.933545    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:16.933545    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:16.964296    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:16.964296    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:17.040603    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:17.030769   30785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:17.031886   30785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:17.032780   30785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:17.035115   30785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:17.036189   30785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:17.030769   30785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:17.031886   30785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:17.032780   30785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:17.035115   30785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:17.036189   30785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:17.040603    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:17.040603    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:17.083647    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:17.083647    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:19.650764    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:19.674143    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:19.702643    4604 logs.go:282] 0 containers: []
	W1213 09:10:19.702643    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:19.707045    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:19.734166    4604 logs.go:282] 0 containers: []
	W1213 09:10:19.734166    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:19.738121    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:19.767856    4604 logs.go:282] 0 containers: []
	W1213 09:10:19.767856    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:19.771207    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:19.801742    4604 logs.go:282] 0 containers: []
	W1213 09:10:19.801819    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:19.805222    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:19.833321    4604 logs.go:282] 0 containers: []
	W1213 09:10:19.833321    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:19.836856    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:19.863434    4604 logs.go:282] 0 containers: []
	W1213 09:10:19.863465    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:19.867234    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:19.897054    4604 logs.go:282] 0 containers: []
	W1213 09:10:19.897054    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:19.897054    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:19.897054    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:19.946805    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:19.946805    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:20.007213    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:20.007213    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:20.036248    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:20.036248    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:20.114272    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:20.104527   30950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:20.106024   30950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:20.107052   30950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:20.108958   30950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:20.109919   30950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:20.104527   30950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:20.106024   30950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:20.107052   30950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:20.108958   30950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:20.109919   30950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:20.114272    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:20.114272    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:22.659210    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:22.681874    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:22.711856    4604 logs.go:282] 0 containers: []
	W1213 09:10:22.711856    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:22.715662    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:22.744003    4604 logs.go:282] 0 containers: []
	W1213 09:10:22.744003    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:22.748080    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:22.778409    4604 logs.go:282] 0 containers: []
	W1213 09:10:22.778409    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:22.781997    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:22.809533    4604 logs.go:282] 0 containers: []
	W1213 09:10:22.809557    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:22.812700    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:22.842593    4604 logs.go:282] 0 containers: []
	W1213 09:10:22.842593    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:22.846788    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:22.874683    4604 logs.go:282] 0 containers: []
	W1213 09:10:22.874683    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:22.878045    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:22.906027    4604 logs.go:282] 0 containers: []
	W1213 09:10:22.906027    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:22.906088    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:22.906107    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:22.970513    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:22.970513    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:23.000755    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:23.000755    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:23.084733    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:23.075283   31086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:23.076072   31086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:23.077826   31086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:23.078971   31086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:23.080011   31086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:23.075283   31086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:23.076072   31086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:23.077826   31086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:23.078971   31086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:23.080011   31086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:23.084733    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:23.084733    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:23.127257    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:23.127257    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
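	[editor's note] The "container status" step is the shell fallback sudo `which crictl || echo crictl` ps -a || sudo docker ps -a: prefer crictl when it is on PATH, otherwise use docker. A Go rendering of the same choice, as a simplified sketch rather than minikube's implementation (sudo is omitted, and unlike the shell form this version does not also fall back to docker when crictl exists but its invocation fails):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Prefer crictl if present on PATH, else fall back to docker,
    	// echoing the shell one-liner from the log above.
    	tool := "docker"
    	if _, err := exec.LookPath("crictl"); err == nil {
    		tool = "crictl"
    	}
    	out, err := exec.Command(tool, "ps", "-a").CombinedOutput()
    	if err != nil {
    		fmt.Printf("%s ps -a failed: %v\n", tool, err)
    		return
    	}
    	fmt.Print(string(out))
    }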
	I1213 09:10:25.686782    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:25.709380    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:25.738484    4604 logs.go:282] 0 containers: []
	W1213 09:10:25.738484    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:25.742065    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:25.770152    4604 logs.go:282] 0 containers: []
	W1213 09:10:25.770152    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:25.774113    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:25.803290    4604 logs.go:282] 0 containers: []
	W1213 09:10:25.803290    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:25.807361    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:25.834734    4604 logs.go:282] 0 containers: []
	W1213 09:10:25.834734    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:25.838734    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:25.865666    4604 logs.go:282] 0 containers: []
	W1213 09:10:25.865666    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:25.869046    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:25.896838    4604 logs.go:282] 0 containers: []
	W1213 09:10:25.896838    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:25.900312    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:25.930732    4604 logs.go:282] 0 containers: []
	W1213 09:10:25.930732    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:25.930732    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:25.930732    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:25.980958    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:25.980958    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:26.041855    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:26.041855    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:26.073493    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:26.073493    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:26.159584    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:26.149576   31246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:26.150693   31246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:26.151667   31246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:26.154327   31246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:26.156130   31246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:26.149576   31246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:26.150693   31246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:26.151667   31246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:26.154327   31246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:26.156130   31246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:26.159584    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:26.159584    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:28.707550    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:28.729858    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:28.759846    4604 logs.go:282] 0 containers: []
	W1213 09:10:28.759846    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:28.763596    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:28.794012    4604 logs.go:282] 0 containers: []
	W1213 09:10:28.794012    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:28.797789    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:28.826515    4604 logs.go:282] 0 containers: []
	W1213 09:10:28.826515    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:28.829640    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:28.861520    4604 logs.go:282] 0 containers: []
	W1213 09:10:28.861520    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:28.864944    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:28.893275    4604 logs.go:282] 0 containers: []
	W1213 09:10:28.893303    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:28.896907    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:28.923381    4604 logs.go:282] 0 containers: []
	W1213 09:10:28.923381    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:28.928293    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:28.960491    4604 logs.go:282] 0 containers: []
	W1213 09:10:28.960491    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:28.960491    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:28.960491    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:29.022787    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:29.022787    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:29.053784    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:29.053784    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:29.136856    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:29.125258   31380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:29.127477   31380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:29.129454   31380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:29.131359   31380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:29.132312   31380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:29.125258   31380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:29.127477   31380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:29.129454   31380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:29.131359   31380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:29.132312   31380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:29.136898    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:29.136898    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:29.179176    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:29.179176    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:31.733518    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:31.756802    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:31.790216    4604 logs.go:282] 0 containers: []
	W1213 09:10:31.790216    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:31.793805    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:31.824397    4604 logs.go:282] 0 containers: []
	W1213 09:10:31.824397    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:31.829526    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:31.857889    4604 logs.go:282] 0 containers: []
	W1213 09:10:31.857889    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:31.861193    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:31.890304    4604 logs.go:282] 0 containers: []
	W1213 09:10:31.890304    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:31.893795    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:31.921856    4604 logs.go:282] 0 containers: []
	W1213 09:10:31.921927    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:31.924962    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:31.953806    4604 logs.go:282] 0 containers: []
	W1213 09:10:31.953837    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:31.957466    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:31.987829    4604 logs.go:282] 0 containers: []
	W1213 09:10:31.987829    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:31.987829    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:31.987829    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:32.034063    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:32.034063    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:32.096079    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:32.096079    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:32.126955    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:32.126955    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:32.209100    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:32.196897   31542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:32.197915   31542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:32.198712   31542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:32.202032   31542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:32.203735   31542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:32.196897   31542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:32.197915   31542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:32.198712   31542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:32.202032   31542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:32.203735   31542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:32.209100    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:32.209100    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:34.755896    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:34.779017    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:34.808294    4604 logs.go:282] 0 containers: []
	W1213 09:10:34.808366    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:34.811869    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:34.839872    4604 logs.go:282] 0 containers: []
	W1213 09:10:34.839938    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:34.843685    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:34.871636    4604 logs.go:282] 0 containers: []
	W1213 09:10:34.871636    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:34.875660    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:34.903443    4604 logs.go:282] 0 containers: []
	W1213 09:10:34.903443    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:34.907770    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:34.935581    4604 logs.go:282] 0 containers: []
	W1213 09:10:34.935581    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:34.939767    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:34.969814    4604 logs.go:282] 0 containers: []
	W1213 09:10:34.969814    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:34.973317    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:35.003474    4604 logs.go:282] 0 containers: []
	W1213 09:10:35.003474    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:35.003474    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:35.003537    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:35.066261    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:35.066261    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:35.097692    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:35.097692    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:35.180207    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:35.168999   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:35.170587   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:35.172028   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:35.173692   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:35.175343   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:35.168999   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:35.170587   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:35.172028   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:35.173692   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:35.175343   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:35.180207    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:35.180207    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:35.223159    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:35.223159    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
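The container-status gather prefers crictl when it is installed and falls back to the Docker CLI: `which crictl || echo crictl` substitutes either the resolved crictl path or the bare name (which then fails if crictl is absent), and the trailing `|| sudo docker ps -a` runs whenever the first command fails. A roughly equivalent sketch of the fallback (illustrative):

	if command -v crictl >/dev/null; then
	  sudo crictl ps -a
	else
	  sudo docker ps -a
	fi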
	I1213 09:10:37.780314    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:37.804001    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:37.835430    4604 logs.go:282] 0 containers: []
	W1213 09:10:37.835430    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:37.839042    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:37.867680    4604 logs.go:282] 0 containers: []
	W1213 09:10:37.867699    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:37.870898    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:37.902798    4604 logs.go:282] 0 containers: []
	W1213 09:10:37.902798    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:37.906542    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:37.934985    4604 logs.go:282] 0 containers: []
	W1213 09:10:37.935050    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:37.938192    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:37.969111    4604 logs.go:282] 0 containers: []
	W1213 09:10:37.969111    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:37.972848    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:38.002751    4604 logs.go:282] 0 containers: []
	W1213 09:10:38.002751    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:38.006552    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:38.035033    4604 logs.go:282] 0 containers: []
	W1213 09:10:38.035033    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:38.035033    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:38.035033    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:38.086087    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:38.086611    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:38.147832    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:38.147832    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:38.180233    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:38.180233    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:38.261008    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:38.249120   31840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:38.250220   31840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:38.251345   31840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:38.252453   31840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:38.253654   31840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:38.249120   31840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:38.250220   31840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:38.251345   31840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:38.252453   31840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:38.253654   31840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:38.261008    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:38.261008    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:40.811191    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:40.833394    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:40.865083    4604 logs.go:282] 0 containers: []
	W1213 09:10:40.865083    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:40.868858    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:40.900204    4604 logs.go:282] 0 containers: []
	W1213 09:10:40.900204    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:40.903500    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:40.930103    4604 logs.go:282] 0 containers: []
	W1213 09:10:40.930103    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:40.933495    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:40.960744    4604 logs.go:282] 0 containers: []
	W1213 09:10:40.960744    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:40.964475    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:40.990935    4604 logs.go:282] 0 containers: []
	W1213 09:10:40.990935    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:40.995048    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:41.022706    4604 logs.go:282] 0 containers: []
	W1213 09:10:41.022706    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:41.026451    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:41.056906    4604 logs.go:282] 0 containers: []
	W1213 09:10:41.056906    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:41.056906    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:41.056906    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:41.115470    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:41.115470    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:41.143967    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:41.143967    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:41.232682    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:41.221185   31975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:41.222351   31975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:41.225465   31975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:41.226707   31975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:41.227919   31975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:41.221185   31975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:41.222351   31975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:41.225465   31975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:41.226707   31975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:41.227919   31975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:41.232682    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:41.232682    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:41.274641    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:41.274641    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:43.828677    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:43.852994    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:43.886713    4604 logs.go:282] 0 containers: []
	W1213 09:10:43.886713    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:43.890625    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:43.919501    4604 logs.go:282] 0 containers: []
	W1213 09:10:43.919501    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:43.923426    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:43.951987    4604 logs.go:282] 0 containers: []
	W1213 09:10:43.951987    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:43.955937    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:43.985130    4604 logs.go:282] 0 containers: []
	W1213 09:10:43.985130    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:43.988484    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:44.018258    4604 logs.go:282] 0 containers: []
	W1213 09:10:44.018258    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:44.022302    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:44.050666    4604 logs.go:282] 0 containers: []
	W1213 09:10:44.050666    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:44.054876    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:44.085108    4604 logs.go:282] 0 containers: []
	W1213 09:10:44.085108    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:44.085108    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:44.085108    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:44.112809    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:44.112809    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:44.193362    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:44.181849   32122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:44.183015   32122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:44.186504   32122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:44.187951   32122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:44.188991   32122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:44.181849   32122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:44.183015   32122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:44.186504   32122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:44.187951   32122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:44.188991   32122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:44.193362    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:44.193362    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:44.237334    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:44.237334    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:44.289034    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:44.289034    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:46.855055    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:46.878443    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:46.909614    4604 logs.go:282] 0 containers: []
	W1213 09:10:46.909614    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:46.916327    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:46.944603    4604 logs.go:282] 0 containers: []
	W1213 09:10:46.944603    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:46.948050    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:46.976487    4604 logs.go:282] 0 containers: []
	W1213 09:10:46.976487    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:46.980498    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:47.008131    4604 logs.go:282] 0 containers: []
	W1213 09:10:47.008131    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:47.011552    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:47.039887    4604 logs.go:282] 0 containers: []
	W1213 09:10:47.039887    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:47.043570    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:47.072161    4604 logs.go:282] 0 containers: []
	W1213 09:10:47.072161    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:47.075765    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:47.105843    4604 logs.go:282] 0 containers: []
	W1213 09:10:47.105843    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:47.105843    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:47.105843    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:47.168444    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:47.168444    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:47.198734    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:47.198734    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:47.280671    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:47.269605   32286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:47.270521   32286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:47.272646   32286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:47.273887   32286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:47.274821   32286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:47.269605   32286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:47.270521   32286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:47.272646   32286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:47.273887   32286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:47.274821   32286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:47.280671    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:47.280671    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:47.322808    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:47.322808    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:49.882724    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:49.904378    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:49.936667    4604 logs.go:282] 0 containers: []
	W1213 09:10:49.936667    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:49.939740    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:49.973628    4604 logs.go:282] 0 containers: []
	W1213 09:10:49.973628    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:49.977831    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:50.008373    4604 logs.go:282] 0 containers: []
	W1213 09:10:50.008452    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:50.013016    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:50.043104    4604 logs.go:282] 0 containers: []
	W1213 09:10:50.043104    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:50.046855    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:50.078353    4604 logs.go:282] 0 containers: []
	W1213 09:10:50.078353    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:50.082270    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:50.113856    4604 logs.go:282] 0 containers: []
	W1213 09:10:50.113856    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:50.118930    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:50.148208    4604 logs.go:282] 0 containers: []
	W1213 09:10:50.148208    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:50.148208    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:50.148208    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:50.214697    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:50.214697    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:50.243820    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:50.243820    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:50.331549    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:50.320817   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:50.321835   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:50.324796   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:50.325911   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:50.326959   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:50.320817   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:50.321835   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:50.324796   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:50.325911   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:50.326959   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:50.331549    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:50.331549    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:50.372171    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:50.372171    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:52.928403    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:52.950923    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:52.979279    4604 logs.go:282] 0 containers: []
	W1213 09:10:52.979307    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:52.982821    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:53.012984    4604 logs.go:282] 0 containers: []
	W1213 09:10:53.013051    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:53.016321    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:53.046839    4604 logs.go:282] 0 containers: []
	W1213 09:10:53.046839    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:53.051164    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:53.080161    4604 logs.go:282] 0 containers: []
	W1213 09:10:53.080161    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:53.083793    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:53.117152    4604 logs.go:282] 0 containers: []
	W1213 09:10:53.117152    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:53.120486    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:53.150543    4604 logs.go:282] 0 containers: []
	W1213 09:10:53.150543    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:53.154171    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:53.184334    4604 logs.go:282] 0 containers: []
	W1213 09:10:53.184334    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:53.184334    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:53.184334    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:53.228630    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:53.228630    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:53.282521    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:53.282558    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:53.346952    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:53.346991    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:53.373976    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:53.373976    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:53.455812    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:53.445139   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:53.446098   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:53.447357   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:53.448734   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:53.450762   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:53.445139   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:53.446098   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:53.447357   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:53.448734   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:53.450762   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
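Each retry cycle above opens with the same liveness probe: pgrep with -x (exact match of the pattern), -n (newest matching process), and -f (match against the full command line). A non-zero exit, meaning no kube-apiserver process exists, sends minikube into another round of log gathering; the timestamps show the cycles repeating roughly every three seconds until the restart budget runs out. The probe in isolation (illustrative):

	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "apiserver not running"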
	I1213 09:10:55.961126    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:55.980524    4604 kubeadm.go:602] duration metric: took 4m3.6754433s to restartPrimaryControlPlane
	W1213 09:10:55.980524    4604 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 09:10:55.985356    4604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1213 09:10:56.635426    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:10:56.658380    4604 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 09:10:56.677797    4604 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 09:10:56.682473    4604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 09:10:56.699107    4604 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 09:10:56.699107    4604 kubeadm.go:158] found existing configuration files:
	
	I1213 09:10:56.703291    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 09:10:56.719044    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 09:10:56.723277    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 09:10:56.742780    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 09:10:56.756514    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 09:10:56.760505    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 09:10:56.780196    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 09:10:56.793888    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 09:10:56.798332    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 09:10:56.817764    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 09:10:56.829936    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 09:10:56.833707    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
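The sequence above is minikube's stale-kubeconfig cleanup: for each of the four kubeconfig files it greps for the expected control-plane endpoint and deletes the file whenever the grep fails, whether because the endpoint is wrong or, as here, because the file does not exist (grep exits with status 2 on a missing file). A compact sketch of the same loop (illustrative):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done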
	I1213 09:10:56.849696    4604 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 09:10:56.965661    4604 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1213 09:10:57.051298    4604 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 09:10:57.163109    4604 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 09:14:58.077510    4604 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 09:14:58.077510    4604 kubeadm.go:319] 
	I1213 09:14:58.077700    4604 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 09:14:58.082513    4604 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 09:14:58.082513    4604 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 09:14:58.083105    4604 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 09:14:58.083105    4604 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1213 09:14:58.083105    4604 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1213 09:14:58.083105    4604 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1213 09:14:58.083105    4604 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1213 09:14:58.083105    4604 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1213 09:14:58.083630    4604 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1213 09:14:58.083660    4604 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1213 09:14:58.083660    4604 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1213 09:14:58.083660    4604 kubeadm.go:319] CONFIG_INET: enabled
	I1213 09:14:58.083660    4604 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1213 09:14:58.083660    4604 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1213 09:14:58.083660    4604 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1213 09:14:58.084184    4604 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1213 09:14:58.084411    4604 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1213 09:14:58.084511    4604 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1213 09:14:58.084637    4604 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1213 09:14:58.084788    4604 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1213 09:14:58.084950    4604 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1213 09:14:58.085041    4604 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1213 09:14:58.085041    4604 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1213 09:14:58.085041    4604 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1213 09:14:58.085041    4604 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1213 09:14:58.085041    4604 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1213 09:14:58.085041    4604 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1213 09:14:58.085561    4604 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1213 09:14:58.085629    4604 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1213 09:14:58.085787    4604 kubeadm.go:319] OS: Linux
	I1213 09:14:58.085905    4604 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 09:14:58.085994    4604 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 09:14:58.086095    4604 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 09:14:58.086249    4604 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 09:14:58.086375    4604 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 09:14:58.086436    4604 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 09:14:58.086559    4604 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 09:14:58.086680    4604 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 09:14:58.086776    4604 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 09:14:58.087006    4604 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 09:14:58.087282    4604 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 09:14:58.087282    4604 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 09:14:58.087282    4604 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 09:14:58.091333    4604 out.go:252]   - Generating certificates and keys ...
	I1213 09:14:58.091333    4604 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 09:14:58.091333    4604 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 09:14:58.091333    4604 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 09:14:58.091861    4604 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 09:14:58.091931    4604 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 09:14:58.091931    4604 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 09:14:58.091931    4604 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 09:14:58.091931    4604 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 09:14:58.091931    4604 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 09:14:58.091931    4604 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 09:14:58.091931    4604 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 09:14:58.091931    4604 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 09:14:58.091931    4604 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 09:14:58.091931    4604 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 09:14:58.092898    4604 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 09:14:58.092898    4604 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 09:14:58.092898    4604 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 09:14:58.092898    4604 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 09:14:58.092898    4604 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 09:14:58.096150    4604 out.go:252]   - Booting up control plane ...
	I1213 09:14:58.096150    4604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 09:14:58.096150    4604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 09:14:58.096150    4604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 09:14:58.096150    4604 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 09:14:58.096150    4604 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 09:14:58.096150    4604 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 09:14:58.097140    4604 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 09:14:58.097140    4604 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 09:14:58.097140    4604 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 09:14:58.097140    4604 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 09:14:58.097140    4604 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00081318s
	I1213 09:14:58.097140    4604 kubeadm.go:319] 
	I1213 09:14:58.097140    4604 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 09:14:58.097140    4604 kubeadm.go:319] 	- The kubelet is not running
	I1213 09:14:58.097140    4604 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 09:14:58.097140    4604 kubeadm.go:319] 
	I1213 09:14:58.098169    4604 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 09:14:58.098169    4604 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 09:14:58.098169    4604 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 09:14:58.098169    4604 kubeadm.go:319] 
	W1213 09:14:58.098169    4604 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00081318s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 09:14:58.103247    4604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1213 09:14:58.557280    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:14:58.576227    4604 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 09:14:58.580590    4604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 09:14:58.591916    4604 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 09:14:58.591916    4604 kubeadm.go:158] found existing configuration files:
	
	I1213 09:14:58.597377    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 09:14:58.611245    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 09:14:58.615321    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 09:14:58.633996    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 09:14:58.647865    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 09:14:58.651889    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 09:14:58.669442    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 09:14:58.682787    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 09:14:58.687832    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 09:14:58.708348    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 09:14:58.722058    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 09:14:58.727337    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 09:14:58.747003    4604 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 09:14:58.861078    4604 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1213 09:14:58.943511    4604 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 09:14:59.043878    4604 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 09:18:59.702905    4604 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 09:18:59.702984    4604 kubeadm.go:319] 
	I1213 09:18:59.703100    4604 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 09:18:59.706956    4604 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 09:18:59.706956    4604 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 09:18:59.708169    4604 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 09:18:59.708169    4604 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1213 09:18:59.708169    4604 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1213 09:18:59.708169    4604 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1213 09:18:59.708169    4604 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1213 09:18:59.708169    4604 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1213 09:18:59.708812    4604 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1213 09:18:59.708880    4604 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1213 09:18:59.708880    4604 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1213 09:18:59.708880    4604 kubeadm.go:319] CONFIG_INET: enabled
	I1213 09:18:59.708880    4604 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1213 09:18:59.708880    4604 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1213 09:18:59.708880    4604 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1213 09:18:59.708880    4604 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1213 09:18:59.708880    4604 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1213 09:18:59.708880    4604 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1213 09:18:59.708880    4604 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1213 09:18:59.709865    4604 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1213 09:18:59.710067    4604 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1213 09:18:59.710115    4604 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1213 09:18:59.710268    4604 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1213 09:18:59.710360    4604 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1213 09:18:59.710543    4604 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1213 09:18:59.710612    4604 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1213 09:18:59.710694    4604 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1213 09:18:59.710783    4604 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1213 09:18:59.710876    4604 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1213 09:18:59.710876    4604 kubeadm.go:319] OS: Linux
	I1213 09:18:59.710876    4604 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 09:18:59.710876    4604 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 09:18:59.710876    4604 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 09:18:59.710876    4604 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 09:18:59.710876    4604 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 09:18:59.711409    4604 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 09:18:59.711492    4604 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 09:18:59.711623    4604 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 09:18:59.711691    4604 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 09:18:59.711874    4604 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 09:18:59.712056    4604 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 09:18:59.712280    4604 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 09:18:59.712416    4604 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 09:18:59.717830    4604 out.go:252]   - Generating certificates and keys ...
	I1213 09:18:59.717830    4604 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 09:18:59.717830    4604 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 09:18:59.717830    4604 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 09:18:59.717830    4604 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 09:18:59.717830    4604 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 09:18:59.717830    4604 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 09:18:59.717830    4604 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 09:18:59.717830    4604 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 09:18:59.718841    4604 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 09:18:59.718841    4604 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 09:18:59.718841    4604 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 09:18:59.718841    4604 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 09:18:59.718841    4604 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 09:18:59.718841    4604 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 09:18:59.718841    4604 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 09:18:59.718841    4604 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 09:18:59.718841    4604 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 09:18:59.718841    4604 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 09:18:59.718841    4604 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 09:18:59.722958    4604 out.go:252]   - Booting up control plane ...
	I1213 09:18:59.722958    4604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 09:18:59.722958    4604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 09:18:59.722958    4604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 09:18:59.723960    4604 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 09:18:59.723960    4604 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 09:18:59.723960    4604 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 09:18:59.723960    4604 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 09:18:59.723960    4604 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 09:18:59.723960    4604 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 09:18:59.724966    4604 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 09:18:59.724966    4604 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001708609s
	I1213 09:18:59.724966    4604 kubeadm.go:319] 
	I1213 09:18:59.724966    4604 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 09:18:59.724966    4604 kubeadm.go:319] 	- The kubelet is not running
	I1213 09:18:59.724966    4604 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 09:18:59.724966    4604 kubeadm.go:319] 
	I1213 09:18:59.724966    4604 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 09:18:59.724966    4604 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 09:18:59.724966    4604 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 09:18:59.724966    4604 kubeadm.go:319] 
	I1213 09:18:59.725960    4604 kubeadm.go:403] duration metric: took 12m7.4678993s to StartCluster
	I1213 09:18:59.725960    4604 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:18:59.729959    4604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:18:59.791539    4604 cri.go:89] found id: ""
	I1213 09:18:59.791620    4604 logs.go:282] 0 containers: []
	W1213 09:18:59.791620    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:18:59.791620    4604 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 09:18:59.796126    4604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:18:59.838188    4604 cri.go:89] found id: ""
	I1213 09:18:59.838188    4604 logs.go:282] 0 containers: []
	W1213 09:18:59.838188    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:18:59.838188    4604 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 09:18:59.842219    4604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:18:59.886873    4604 cri.go:89] found id: ""
	I1213 09:18:59.886928    4604 logs.go:282] 0 containers: []
	W1213 09:18:59.886928    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:18:59.886959    4604 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:18:59.891184    4604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:18:59.935247    4604 cri.go:89] found id: ""
	I1213 09:18:59.935247    4604 logs.go:282] 0 containers: []
	W1213 09:18:59.935247    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:18:59.935247    4604 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:18:59.940658    4604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:18:59.979678    4604 cri.go:89] found id: ""
	I1213 09:18:59.979678    4604 logs.go:282] 0 containers: []
	W1213 09:18:59.979678    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:18:59.979678    4604 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:18:59.984360    4604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:19:00.029429    4604 cri.go:89] found id: ""
	I1213 09:19:00.029429    4604 logs.go:282] 0 containers: []
	W1213 09:19:00.029429    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:19:00.029429    4604 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 09:19:00.034206    4604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:19:00.078417    4604 cri.go:89] found id: ""
	I1213 09:19:00.078417    4604 logs.go:282] 0 containers: []
	W1213 09:19:00.078417    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:19:00.078417    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:19:00.078417    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:19:00.158314    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:19:00.149922   40589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:00.150826   40589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:00.153483   40589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:00.154798   40589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:00.155843   40589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:19:00.149922   40589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:00.150826   40589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:00.153483   40589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:00.154798   40589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:00.155843   40589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:19:00.158314    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:19:00.158314    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:19:00.200907    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:19:00.201904    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:19:00.251291    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:19:00.251291    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:19:00.314330    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:19:00.314330    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 09:19:00.346177    4604 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001708609s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 09:19:00.346280    4604 out.go:285] * 
	W1213 09:19:00.346392    4604 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001708609s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 09:19:00.346427    4604 out.go:285] * 
	W1213 09:19:00.348597    4604 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 09:19:00.354189    4604 out.go:203] 
	W1213 09:19:00.361975    4604 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001708609s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 09:19:00.362101    4604 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 09:19:00.362101    4604 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 09:19:00.368166    4604 out.go:203] 
	
	
	==> Docker <==
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.829030467Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.829036768Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.829059870Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.829091672Z" level=info msg="Initializing buildkit"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.942041157Z" level=info msg="Completed buildkit initialization"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.947761286Z" level=info msg="Daemon has completed initialization"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.947947300Z" level=info msg="API listen on [::]:2376"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.948053208Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.948082310Z" level=info msg="API listen on /run/docker.sock"
	Dec 13 09:06:48 functional-482100 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 13 09:06:49 functional-482100 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 09:06:49 functional-482100 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 13 09:06:49 functional-482100 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 13 09:06:49 functional-482100 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Start docker client with request timeout 0s"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Loaded network plugin cni"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 13 09:06:49 functional-482100 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:19:56.735395   41776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:56.737590   41776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:56.738953   41776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:56.740239   41776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:56.742350   41776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000787] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001010] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001229] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001341] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001210] FS:  0000000000000000 GS:  0000000000000000
	[Dec13 09:06] CPU: 10 PID: 66098 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000816] RIP: 0033:0x7fb64675ab20
	[  +0.000442] Code: Unable to access opcode bytes at RIP 0x7fb64675aaf6.
	[  +0.000680] RSP: 002b:00007ffe69215830 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000780] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000798] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000796] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000835] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000824] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000777] FS:  0000000000000000 GS:  0000000000000000
	[  +0.885911] CPU: 0 PID: 66226 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000821] RIP: 0033:0x7f29b6797b20
	[  +0.000390] Code: Unable to access opcode bytes at RIP 0x7f29b6797af6.
	[  +0.000688] RSP: 002b:00007fff1d5027b0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000799] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000781] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000770] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000791] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001021] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001388] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 09:19:56 up 56 min,  0 user,  load average: 0.38, 0.34, 0.43
	Linux functional-482100 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 09:19:52 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:19:53 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 392.
	Dec 13 09:19:53 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:19:53 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:19:53 functional-482100 kubelet[41614]: E1213 09:19:53.732562   41614 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:19:53 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:19:53 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:19:54 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 393.
	Dec 13 09:19:54 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:19:54 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:19:54 functional-482100 kubelet[41625]: E1213 09:19:54.498728   41625 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:19:54 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:19:54 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:19:55 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 394.
	Dec 13 09:19:55 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:19:55 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:19:55 functional-482100 kubelet[41654]: E1213 09:19:55.231597   41654 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:19:55 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:19:55 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:19:55 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 395.
	Dec 13 09:19:55 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:19:55 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:19:56 functional-482100 kubelet[41681]: E1213 09:19:56.292601   41681 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:19:56 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:19:56 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-482100 -n functional-482100
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-482100 -n functional-482100: exit status 2 (608.0232ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-482100" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (54.45s)
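
Every kubelet restart in the journal above dies on the same validation error: this kubelet build is "configured to not run on a host using cgroup v1", and the WSL2 kernel here (5.15.153.1-microsoft-standard-WSL2) is running cgroup v1. The kubeadm preflight warning names the opt-out: set the kubelet configuration option 'FailCgroupV1' to 'false' (the run already ignores the SystemVerification preflight check, as the kubeadm init flags show). A minimal sketch of that opt-out, assuming it is delivered through the same kubeadm patch mechanism the log shows being applied to "kubeletconfiguration":

	# KubeletConfiguration fragment; failCgroupV1 is the field the
	# preflight warning names (delivery via kubeadm patches is assumed).
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false

minikube's own suggestion can also be tried verbatim:

	out/minikube-windows-amd64.exe start -p functional-482100 --extra-config=kubelet.cgroup-driver=systemd

though cgroup-driver targets a different mismatch than the cgroup v1 validation; the durable fix for this host would be booting WSL2 with cgroup v2 (for example, kernelCommandLine = cgroup_no_v1=all under [wsl2] in .wslconfig, an environment change this report does not verify).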
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (20.2s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-482100 apply -f testdata\invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-482100 apply -f testdata\invalidsvc.yaml: exit status 1 (20.2005194s)
** stderr ** 
	error: error validating "testdata\\invalidsvc.yaml": error validating data: failed to download openapi: Get "https://127.0.0.1:63845/openapi/v2?timeout=32s": EOF; if you choose to ignore these errors, turn validation off with --validate=false
** /stderr **
functional_test.go:2328: kubectl --context functional-482100 apply -f testdata\invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (20.20s)
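
Despite the 'error validating' prefix, the failure here is connectivity rather than the intentionally invalid YAML: kubectl could not download the OpenAPI schema because the apiserver behind https://127.0.0.1:63845 closed the connection (EOF), consistent with the 'apiserver: Stopped' state reported by the serial tests above. A quick triage sketch, using only commands this report already exercises (same profile and context names):

	# Both should fail the same way while the apiserver is down:
	kubectl --context functional-482100 cluster-info
	out/minikube-windows-amd64.exe -p functional-482100 status

The --validate=false escape hatch mentioned in the error text only skips client-side schema validation; it cannot make an unreachable apiserver respond.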
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (5.41s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-482100 status: exit status 2 (598.2496ms)
-- stdout --
	functional-482100
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-windows-amd64.exe -p functional-482100 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-482100 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (581.846ms)
-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured
-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-windows-amd64.exe -p functional-482100 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-482100 status -o json: exit status 2 (591.2247ms)
-- stdout --
	{"Name":"functional-482100","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-windows-amd64.exe -p functional-482100 status -o json" : exit status 2
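
All three invocations report the same state through different renderings: plain 'status' prints the human-readable block, -f takes a Go template over the fields Host, Kubelet, APIServer and Kubeconfig (the literal 'kublet:' in the test's format string is just an output label; the field reference {{.Kubelet}} is spelled correctly), and -o json emits a single JSON object. A hypothetical consumer of the JSON form on this Windows host, in PowerShell (which this run already uses for the DockerEnv test):

	# Parse the JSON status and pick the fields the test asserts on
	# (illustration only; not part of the test suite).
	out/minikube-windows-amd64.exe -p functional-482100 status -o json | ConvertFrom-Json | Select-Object Host,Kubelet,APIServer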
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-482100
helpers_test.go:244: (dbg) docker inspect functional-482100:
-- stdout --
	[
	    {
	        "Id": "688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa",
	        "Created": "2025-12-13T08:49:07.27080474Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43282,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T08:49:07.556748749Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/hostname",
	        "HostsPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/hosts",
	        "LogPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa-json.log",
	        "Name": "/functional-482100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-482100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-482100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91-init/diff:/var/lib/docker/overlay2/429aa299c6fcdb1695d08ec7c893c57c033afffcd3ec41fc904bf3236db5abde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-482100",
	                "Source": "/var/lib/docker/volumes/functional-482100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-482100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-482100",
	                "name.minikube.sigs.k8s.io": "functional-482100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0846ee7b9ca8cb54809a7d685cd1bf9a4ebcad80c4fa7d3ad64c01e27d0c8bc4",
	            "SandboxKey": "/var/run/docker/netns/0846ee7b9ca8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63841"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63842"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63844"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63845"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-482100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "88ce21d6cbdebdf878313475255fe0fbc85957ab9cf1fa33630b61bbbfd2061c",
	                    "EndpointID": "88d9584a7fae8c35f7938fb422a7bed2f8ec5a3db15bd02c0d2459ed9f8f0e4d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-482100",
	                        "688ac19b4403"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
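The inspect output above is what the harness (and minikube itself, later in these logs) mines for connection details: NetworkSettings.Ports maps the container's 22/tcp to 127.0.0.1:63841, which is where the SSH provisioner connects. A minimal sketch follows, using the same Go template that appears in the "Last Start" log below; the container name is this run's profile.

// Sketch only: read the published SSH port from docker inspect.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template the minikube logs below show for resolving the SSH endpoint.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "functional-482100").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // "63841" in this run
}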
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-482100 -n functional-482100
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-482100 -n functional-482100: exit status 2 (548.4462ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-482100 logs -n 25: (1.2701948s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                 ARGS                                                                                                 │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p functional-482100 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                                                             │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │                     │
	│ ssh     │ functional-482100 ssh echo hello                                                                                                                                                                     │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ tunnel  │ functional-482100 tunnel --alsologtostderr                                                                                                                                                           │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │                     │
	│ config  │ functional-482100 config unset cpus                                                                                                                                                                  │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ tunnel  │ functional-482100 tunnel --alsologtostderr                                                                                                                                                           │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │                     │
	│ cp      │ functional-482100 cp testdata\cp-test.txt /home/docker/cp-test.txt                                                                                                                                   │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ config  │ functional-482100 config get cpus                                                                                                                                                                    │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │                     │
	│ config  │ functional-482100 config set cpus 2                                                                                                                                                                  │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ tunnel  │ functional-482100 tunnel --alsologtostderr                                                                                                                                                           │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │                     │
	│ ssh     │ functional-482100 ssh cat /etc/hostname                                                                                                                                                              │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ ssh     │ functional-482100 ssh -n functional-482100 sudo cat /home/docker/cp-test.txt                                                                                                                         │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ config  │ functional-482100 config get cpus                                                                                                                                                                    │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ config  │ functional-482100 config unset cpus                                                                                                                                                                  │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ config  │ functional-482100 config get cpus                                                                                                                                                                    │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │                     │
	│ cp      │ functional-482100 cp functional-482100:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp315632686\001\cp-test.txt │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ addons  │ functional-482100 addons list                                                                                                                                                                        │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ addons  │ functional-482100 addons list -o json                                                                                                                                                                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ ssh     │ functional-482100 ssh -n functional-482100 sudo cat /home/docker/cp-test.txt                                                                                                                         │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ cp      │ functional-482100 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                                                            │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ ssh     │ functional-482100 ssh -n functional-482100 sudo cat /tmp/does/not/exist/cp-test.txt                                                                                                                  │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ service │ functional-482100 service list                                                                                                                                                                       │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │                     │
	│ service │ functional-482100 service list -o json                                                                                                                                                               │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │                     │
	│ service │ functional-482100 service --namespace=default --https --url hello-node                                                                                                                               │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │                     │
	│ service │ functional-482100 service hello-node --url --format={{.IP}}                                                                                                                                          │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │                     │
	│ service │ functional-482100 service hello-node --url                                                                                                                                                           │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:06:42
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:06:42.717723    4604 out.go:360] Setting OutFile to fd 964 ...
	I1213 09:06:42.759720    4604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:06:42.759720    4604 out.go:374] Setting ErrFile to fd 1684...
	I1213 09:06:42.759720    4604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:06:42.775684    4604 out.go:368] Setting JSON to false
	I1213 09:06:42.778565    4604 start.go:133] hostinfo: {"hostname":"minikube4","uptime":2610,"bootTime":1765614192,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 09:06:42.778565    4604 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 09:06:42.783192    4604 out.go:179] * [functional-482100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 09:06:42.786187    4604 notify.go:221] Checking for updates...
	I1213 09:06:42.786345    4604 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 09:06:42.788643    4604 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:06:42.791579    4604 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 09:06:42.793982    4604 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:06:42.796424    4604 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:06:42.798851    4604 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 09:06:42.799423    4604 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:06:42.991260    4604 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 09:06:42.994416    4604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:06:43.223298    4604 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-13 09:06:43.202416057 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 09:06:43.228742    4604 out.go:179] * Using the docker driver based on existing profile
	I1213 09:06:43.237191    4604 start.go:309] selected driver: docker
	I1213 09:06:43.237191    4604 start.go:927] validating driver "docker" against &{Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:06:43.238191    4604 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:06:43.244191    4604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:06:43.469724    4604 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-13 09:06:43.451401286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 09:06:43.566702    4604 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:06:43.567247    4604 cni.go:84] Creating CNI manager for ""
	I1213 09:06:43.567332    4604 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 09:06:43.567332    4604 start.go:353] cluster config:
	{Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:06:43.571338    4604 out.go:179] * Starting "functional-482100" primary control-plane node in "functional-482100" cluster
	I1213 09:06:43.574242    4604 cache.go:134] Beginning downloading kic base image for docker with docker
	I1213 09:06:43.576258    4604 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 09:06:43.580317    4604 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 09:06:43.580377    4604 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 09:06:43.580526    4604 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1213 09:06:43.580526    4604 cache.go:65] Caching tarball of preloaded images
	I1213 09:06:43.580984    4604 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 09:06:43.581085    4604 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1213 09:06:43.581294    4604 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\config.json ...
	I1213 09:06:43.661395    4604 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 09:06:43.661446    4604 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 09:06:43.661502    4604 cache.go:243] Successfully downloaded all kic artifacts
	I1213 09:06:43.661597    4604 start.go:360] acquireMachinesLock for functional-482100: {Name:mkdbad0c5d0c221588a4a9490c5c0730668b0a50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:06:43.661744    4604 start.go:364] duration metric: took 97.5µs to acquireMachinesLock for "functional-482100"
	I1213 09:06:43.661894    4604 start.go:96] Skipping create...Using existing machine configuration
	I1213 09:06:43.661968    4604 fix.go:54] fixHost starting: 
	I1213 09:06:43.668789    4604 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
	I1213 09:06:43.726255    4604 fix.go:112] recreateIfNeeded on functional-482100: state=Running err=<nil>
	W1213 09:06:43.726255    4604 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 09:06:43.729251    4604 out.go:252] * Updating the running docker "functional-482100" container ...
	I1213 09:06:43.729251    4604 machine.go:94] provisionDockerMachine start ...
	I1213 09:06:43.733252    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:43.788369    4604 main.go:143] libmachine: Using SSH client type: native
	I1213 09:06:43.788946    4604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 09:06:43.788946    4604 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 09:06:43.970841    4604 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-482100
	
	I1213 09:06:43.970841    4604 ubuntu.go:182] provisioning hostname "functional-482100"
	I1213 09:06:43.974885    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:44.031548    4604 main.go:143] libmachine: Using SSH client type: native
	I1213 09:06:44.032011    4604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 09:06:44.032011    4604 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-482100 && echo "functional-482100" | sudo tee /etc/hostname
	I1213 09:06:44.226185    4604 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-482100
	
	I1213 09:06:44.230480    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:44.283942    4604 main.go:143] libmachine: Using SSH client type: native
	I1213 09:06:44.284648    4604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 09:06:44.284648    4604 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-482100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-482100/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-482100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 09:06:44.459239    4604 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 09:06:44.459239    4604 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1213 09:06:44.459239    4604 ubuntu.go:190] setting up certificates
	I1213 09:06:44.459239    4604 provision.go:84] configureAuth start
	I1213 09:06:44.464098    4604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-482100
	I1213 09:06:44.517408    4604 provision.go:143] copyHostCerts
	I1213 09:06:44.518409    4604 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1213 09:06:44.518409    4604 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1213 09:06:44.518409    4604 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1213 09:06:44.519524    4604 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1213 09:06:44.519524    4604 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1213 09:06:44.519524    4604 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1213 09:06:44.520761    4604 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1213 09:06:44.520761    4604 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1213 09:06:44.520761    4604 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1213 09:06:44.521333    4604 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-482100 san=[127.0.0.1 192.168.49.2 functional-482100 localhost minikube]
	I1213 09:06:44.683862    4604 provision.go:177] copyRemoteCerts
	I1213 09:06:44.688852    4604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 09:06:44.691943    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:44.744886    4604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 09:06:44.879038    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 09:06:44.911005    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 09:06:44.941373    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 09:06:44.969809    4604 provision.go:87] duration metric: took 510.5655ms to configureAuth
	I1213 09:06:44.969809    4604 ubuntu.go:206] setting minikube options for container-runtime
	I1213 09:06:44.970648    4604 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 09:06:44.974094    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:45.031966    4604 main.go:143] libmachine: Using SSH client type: native
	I1213 09:06:45.032404    4604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 09:06:45.032404    4604 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 09:06:45.211091    4604 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1213 09:06:45.211091    4604 ubuntu.go:71] root file system type: overlay
	I1213 09:06:45.211091    4604 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 09:06:45.214999    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:45.278005    4604 main.go:143] libmachine: Using SSH client type: native
	I1213 09:06:45.278423    4604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 09:06:45.278519    4604 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 09:06:45.475276    4604 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 09:06:45.478711    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:45.533172    4604 main.go:143] libmachine: Using SSH client type: native
	I1213 09:06:45.533745    4604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 63841 <nil> <nil>}
	I1213 09:06:45.533745    4604 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 09:06:45.728810    4604 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 09:06:45.728810    4604 machine.go:97] duration metric: took 1.999543s to provisionDockerMachine
	I1213 09:06:45.728810    4604 start.go:293] postStartSetup for "functional-482100" (driver="docker")
	I1213 09:06:45.728810    4604 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 09:06:45.732939    4604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 09:06:45.736061    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:45.792193    4604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 09:06:45.929940    4604 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 09:06:45.938024    4604 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 09:06:45.938024    4604 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 09:06:45.938024    4604 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1213 09:06:45.939007    4604 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1213 09:06:45.939007    4604 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> 29682.pem in /etc/ssl/certs
	I1213 09:06:45.940034    4604 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\2968\hosts -> hosts in /etc/test/nested/copy/2968
	I1213 09:06:45.944509    4604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2968
	I1213 09:06:45.956570    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /etc/ssl/certs/29682.pem (1708 bytes)
	I1213 09:06:45.988344    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\2968\hosts --> /etc/test/nested/copy/2968/hosts (40 bytes)
	I1213 09:06:46.020180    4604 start.go:296] duration metric: took 291.3676ms for postStartSetup
	I1213 09:06:46.024635    4604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:06:46.027628    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:46.080253    4604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 09:06:46.215093    4604 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 09:06:46.224875    4604 fix.go:56] duration metric: took 2.5628868s for fixHost
	I1213 09:06:46.224875    4604 start.go:83] releasing machines lock for "functional-482100", held for 2.5631106s
	I1213 09:06:46.227979    4604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-482100
	I1213 09:06:46.281460    4604 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1213 09:06:46.284589    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:46.284589    4604 ssh_runner.go:195] Run: cat /version.json
	I1213 09:06:46.287589    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:46.339381    4604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	I1213 09:06:46.341884    4604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
	W1213 09:06:46.471031    4604 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
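
This stderr is the root of the registry warning printed a few lines below: the connectivity probe passes the Windows binary name curl.exe to bash inside the Linux container, where no such command exists, so the check fails with exit status 127 regardless of actual network reachability. A probe that would actually run inside the Debian-based container might look like this (a sketch; it assumes curl, or failing that wget, is installed in the kicbase image):

    docker exec functional-482100 sh -c \
      'command -v curl >/dev/null && curl -sS -m 2 https://registry.k8s.io/ \
       || wget -qO- -T 2 https://registry.k8s.io/'
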
	I1213 09:06:46.475772    4604 ssh_runner.go:195] Run: systemctl --version
	I1213 09:06:46.491471    4604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 09:06:46.501246    4604 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 09:06:46.506902    4604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 09:06:46.521536    4604 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 09:06:46.521536    4604 start.go:496] detecting cgroup driver to use...
	I1213 09:06:46.521536    4604 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 09:06:46.521536    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 09:06:46.547922    4604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 09:06:46.569619    4604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 09:06:46.584943    4604 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 09:06:46.588980    4604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1213 09:06:46.598267    4604 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1213 09:06:46.598267    4604 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1213 09:06:46.612904    4604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 09:06:46.631660    4604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 09:06:46.651016    4604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 09:06:46.672904    4604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 09:06:46.691930    4604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 09:06:46.710477    4604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 09:06:46.730250    4604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
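
The run of sed edits above rewrites /etc/containerd/config.toml in place: cgroupfs instead of the systemd cgroup driver, the pinned pause:3.10.1 sandbox image, the runc.v2 runtime shim, /etc/cni/net.d as the CNI conf dir, and unprivileged ports enabled. A quick spot-check of the result (hypothetical follow-up command, not in the log):

    sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' \
      /etc/containerd/config.toml
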
	I1213 09:06:46.750913    4604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 09:06:46.770554    4604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 09:06:46.792378    4604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:06:47.034402    4604 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 09:06:47.276302    4604 start.go:496] detecting cgroup driver to use...
	I1213 09:06:47.276363    4604 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 09:06:47.280722    4604 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 09:06:47.305066    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 09:06:47.327135    4604 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 09:06:47.404977    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 09:06:47.431107    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 09:06:47.450015    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 09:06:47.478646    4604 ssh_runner.go:195] Run: which cri-dockerd
	I1213 09:06:47.491243    4604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 09:06:47.503124    4604 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1213 09:06:47.527239    4604 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 09:06:47.667767    4604 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 09:06:47.799062    4604 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 09:06:47.799062    4604 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
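
The 130-byte daemon.json written here is what switches Docker itself to cgroupfs. Its content is not echoed in the log; based on minikube's daemon.json template it is plausibly the following, but treat the exact fields as an assumption:

    sudo cat /etc/docker/daemon.json
    # {"exec-opts":["native.cgroupdriver=cgroupfs"],"log-driver":"json-file",
    #  "log-opts":{"max-size":"100m"},"storage-driver":"overlay2"}
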
	I1213 09:06:47.826470    4604 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1213 09:06:47.848448    4604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:06:47.994955    4604 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 09:06:48.954293    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 09:06:48.976829    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 09:06:49.001926    4604 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1213 09:06:49.028432    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 09:06:49.050748    4604 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 09:06:49.205807    4604 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 09:06:49.342941    4604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:06:49.483831    4604 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 09:06:49.508934    4604 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1213 09:06:49.531916    4604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:06:49.703017    4604 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 09:06:49.814910    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 09:06:49.832973    4604 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 09:06:49.837568    4604 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 09:06:49.846585    4604 start.go:564] Will wait 60s for crictl version
	I1213 09:06:49.850486    4604 ssh_runner.go:195] Run: which crictl
	I1213 09:06:49.861564    4604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 09:06:49.905261    4604 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
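
With /etc/crictl.yaml now pointing at cri-dockerd, the version probe above resolves through that socket. The equivalent query with the endpoint spelled out explicitly rather than read from the config file (sketch):

    sudo /usr/local/bin/crictl \
      --runtime-endpoint unix:///var/run/cri-dockerd.sock version
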
	I1213 09:06:49.909293    4604 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 09:06:49.949851    4604 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 09:06:49.999228    4604 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1213 09:06:50.003267    4604 cli_runner.go:164] Run: docker exec -t functional-482100 dig +short host.docker.internal
	I1213 09:06:50.178404    4604 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1213 09:06:50.184053    4604 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
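
dig resolves host.docker.internal to Docker Desktop's host gateway (192.168.65.254 here), and the grep checks whether host.minikube.internal is already mapped in /etc/hosts. When the grep misses, minikube appends the mapping itself; the idiom is roughly the following (a sketch, since the actual write is not shown in the log):

    grep -q 'host.minikube.internal' /etc/hosts || \
      echo '192.168.65.254 host.minikube.internal' | sudo tee -a /etc/hosts
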
	I1213 09:06:50.194897    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:50.254370    4604 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1213 09:06:50.256155    4604 kubeadm.go:884] updating cluster {Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 09:06:50.256766    4604 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 09:06:50.259593    4604 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 09:06:50.291635    4604 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-482100
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1213 09:06:50.291635    4604 docker.go:621] Images already preloaded, skipping extraction
	I1213 09:06:50.295568    4604 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 09:06:50.325004    4604 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-482100
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1213 09:06:50.325004    4604 cache_images.go:86] Images are preloaded, skipping loading
	I1213 09:06:50.325004    4604 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1213 09:06:50.325004    4604 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-482100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 09:06:50.328257    4604 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1213 09:06:50.622080    4604 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1213 09:06:50.622145    4604 cni.go:84] Creating CNI manager for ""
	I1213 09:06:50.622145    4604 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 09:06:50.622208    4604 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 09:06:50.622208    4604 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-482100 NodeName:functional-482100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 09:06:50.622373    4604 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-482100"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
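
Before this rendered kubeadm.yaml is copied onto the node (a few lines below), it can be sanity-checked against the target kubeadm binary; recent kubeadm releases ship a validate subcommand (assuming the v1.35.0-beta.0 binary retains it, a sketch):

    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
      kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
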
	
	I1213 09:06:50.626372    4604 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 09:06:50.640912    4604 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 09:06:50.644769    4604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 09:06:50.657199    4604 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1213 09:06:50.677193    4604 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 09:06:50.697253    4604 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I1213 09:06:50.723871    4604 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 09:06:50.735113    4604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:06:50.895085    4604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:06:51.205789    4604 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100 for IP: 192.168.49.2
	I1213 09:06:51.205789    4604 certs.go:195] generating shared ca certs ...
	I1213 09:06:51.205789    4604 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:06:51.206694    4604 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1213 09:06:51.206931    4604 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1213 09:06:51.207202    4604 certs.go:257] generating profile certs ...
	I1213 09:06:51.207247    4604 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\client.key
	I1213 09:06:51.207958    4604 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.key.13621831
	I1213 09:06:51.207958    4604 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.key
	I1213 09:06:51.208796    4604 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem (1338 bytes)
	W1213 09:06:51.208796    4604 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968_empty.pem, impossibly tiny 0 bytes
	I1213 09:06:51.209325    4604 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1213 09:06:51.209671    4604 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1213 09:06:51.209671    4604 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1213 09:06:51.209671    4604 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1213 09:06:51.210415    4604 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem (1708 bytes)
	I1213 09:06:51.211988    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 09:06:51.241166    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 09:06:51.270190    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 09:06:51.305732    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 09:06:51.336212    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 09:06:51.365643    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 09:06:51.395250    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 09:06:51.426424    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-482100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 09:06:51.456416    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 09:06:51.485568    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem --> /usr/share/ca-certificates/2968.pem (1338 bytes)
	I1213 09:06:51.513607    4604 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /usr/share/ca-certificates/29682.pem (1708 bytes)
	I1213 09:06:51.544659    4604 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 09:06:51.569245    4604 ssh_runner.go:195] Run: openssl version
	I1213 09:06:51.589082    4604 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:06:51.610612    4604 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 09:06:51.632111    4604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:06:51.640287    4604 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:06:51.644860    4604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:06:51.695068    4604 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 09:06:51.712089    4604 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2968.pem
	I1213 09:06:51.730159    4604 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2968.pem /etc/ssl/certs/2968.pem
	I1213 09:06:51.750455    4604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2968.pem
	I1213 09:06:51.759490    4604 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:48 /usr/share/ca-certificates/2968.pem
	I1213 09:06:51.764057    4604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2968.pem
	I1213 09:06:51.813702    4604 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 09:06:51.830987    4604 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29682.pem
	I1213 09:06:51.848737    4604 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29682.pem /etc/ssl/certs/29682.pem
	I1213 09:06:51.866735    4604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29682.pem
	I1213 09:06:51.874087    4604 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:48 /usr/share/ca-certificates/29682.pem
	I1213 09:06:51.878230    4604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29682.pem
	I1213 09:06:51.926970    4604 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
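
The three rounds above (minikubeCA.pem, 2968.pem, 29682.pem) follow OpenSSL's subject-hash lookup scheme: each CA is made discoverable under /etc/ssl/certs/<subject-hash>.0, which is why the links b5213941.0, 51391683.0 and 3ec20f2e.0 are tested. Restated compactly (sketch):

    for pem in minikubeCA 2968 29682; do
      # derive the subject hash and create the lookup symlink OpenSSL expects
      h=$(openssl x509 -hash -noout -in "/usr/share/ca-certificates/${pem}.pem")
      sudo ln -fs "/usr/share/ca-certificates/${pem}.pem" "/etc/ssl/certs/${h}.0"
    done
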
	I1213 09:06:51.943705    4604 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 09:06:51.956247    4604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 09:06:52.006902    4604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 09:06:52.056817    4604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 09:06:52.106649    4604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 09:06:52.159409    4604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 09:06:52.206463    4604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
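
Each of the six checks above relies on openssl's -checkend flag, which exits 0 only if the certificate is still valid the given number of seconds from now; 86400 therefore asks "does this cert survive the next 24 hours?". For example:

    # exit status, not output, carries the answer
    openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt \
      -checkend 86400 && echo 'valid for at least 24h' || echo 'expiring within 24h'
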
	I1213 09:06:52.251679    4604 kubeadm.go:401] StartCluster: {Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:06:52.256595    4604 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 09:06:52.289711    4604 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 09:06:52.303076    4604 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 09:06:52.303076    4604 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 09:06:52.307600    4604 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 09:06:52.319493    4604 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:06:52.323244    4604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-482100
	I1213 09:06:52.375973    4604 kubeconfig.go:125] found "functional-482100" server: "https://127.0.0.1:63845"
	I1213 09:06:52.384564    4604 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 09:06:52.400436    4604 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 08:49:19.464397186 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 09:06:50.708121923 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1213 09:06:52.400436    4604 kubeadm.go:1161] stopping kube-system containers ...
	I1213 09:06:52.404765    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 09:06:52.439058    4604 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 09:06:52.463926    4604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 09:06:52.476815    4604 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 13 08:53 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 13 08:53 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 13 08:53 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 13 08:53 /etc/kubernetes/scheduler.conf
	
	I1213 09:06:52.482061    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 09:06:52.502735    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 09:06:52.519106    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:06:52.523157    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 09:06:52.541594    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 09:06:52.557952    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:06:52.562286    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 09:06:52.581460    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 09:06:52.594972    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:06:52.600191    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 09:06:52.618621    4604 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 09:06:52.641664    4604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 09:06:52.896546    4604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 09:06:53.462301    4604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 09:06:53.694179    4604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 09:06:53.760215    4604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 09:06:53.817909    4604 api_server.go:52] waiting for apiserver process to appear ...
	I1213 09:06:53.824127    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:54.324298    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:54.823616    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:55.323720    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:55.823860    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:56.324648    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:56.823338    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:57.323932    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:57.823662    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:58.325441    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:58.823290    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:59.324178    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:06:59.823834    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:00.323384    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:00.824342    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:01.322728    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:01.825381    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:02.323125    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:02.823650    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:03.323054    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:03.823648    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:04.323519    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:04.822908    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:05.323004    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:05.823657    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:06.324223    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:06.822603    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:07.322828    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:07.824194    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:08.323166    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:08.823223    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:09.322943    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:09.823068    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:10.323743    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:10.823847    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:11.325801    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:11.823253    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:12.323701    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:12.823566    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:13.323096    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:13.822920    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:14.323236    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:14.822845    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:15.323202    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:15.823028    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:16.320733    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:16.823214    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:17.323253    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:17.823515    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:18.323838    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:18.822838    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:19.323955    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:19.823948    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:20.324026    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:20.823129    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:21.323245    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:21.823815    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:22.323343    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:22.823677    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:23.323428    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:23.823426    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:24.323295    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:24.823766    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:25.323104    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:25.824973    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:26.323001    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:26.822856    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:27.323222    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:27.824487    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:28.325702    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:28.823423    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:29.324186    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:29.824044    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:30.324049    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:30.822878    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:31.323296    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:31.823313    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:32.322735    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:32.824301    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:33.324665    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:33.823915    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:34.323027    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:34.823403    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:35.323680    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:35.824836    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:36.323334    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:36.823224    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:37.324136    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:37.824342    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:38.323652    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:38.825016    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:39.325354    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:39.824443    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:40.323965    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:40.824628    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:41.324070    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:41.824202    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:42.325124    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:42.823287    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:43.324764    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:43.823938    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:44.323817    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:44.823922    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:45.324123    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:45.824182    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:46.325015    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:46.824205    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:47.323091    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:47.823407    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:48.322847    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:48.823901    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:49.325349    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:49.824694    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:50.323496    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:50.824112    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:51.323585    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:51.825519    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:52.323663    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:52.824612    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:53.324473    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
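
The minute of pgrep retries above is the apiserver-process wait announced at 09:06:53.817909, polled at roughly 500 ms intervals; the process never appears. Folded into a single bounded loop it would read (sketch; the 60 s bound is an assumption mirroring the other waits in this log):

    timeout 60 bash -c \
      'until sudo pgrep -xnf "kube-apiserver.*minikube.*"; do sleep 0.5; done'
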
	I1213 09:07:53.823636    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:07:53.968254    4604 logs.go:282] 0 containers: []
	W1213 09:07:53.968254    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:07:53.971723    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:07:54.005821    4604 logs.go:282] 0 containers: []
	W1213 09:07:54.005868    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:07:54.009997    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:07:54.043633    4604 logs.go:282] 0 containers: []
	W1213 09:07:54.043633    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:07:54.047702    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:07:54.077692    4604 logs.go:282] 0 containers: []
	W1213 09:07:54.077692    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:07:54.081464    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:07:54.109644    4604 logs.go:282] 0 containers: []
	W1213 09:07:54.109644    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:07:54.113266    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:07:54.141926    4604 logs.go:282] 0 containers: []
	W1213 09:07:54.141926    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:07:54.145352    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:07:54.178100    4604 logs.go:282] 0 containers: []
	W1213 09:07:54.178100    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:07:54.178100    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:07:54.178164    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:07:54.252196    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:07:54.252196    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:07:54.284935    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:07:54.285971    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:07:54.538213    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:07:54.529451   23686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:54.530614   23686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:54.531692   23686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:54.532968   23686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:54.534319   23686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:07:54.529451   23686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:54.530614   23686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:54.531692   23686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:54.532968   23686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:54.534319   23686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
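
describe nodes fails here for the same underlying reason the pgrep loop never matched: nothing is listening on port 8441 yet. A lighter readiness probe against the same endpoint, once something does answer, would be (sketch):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get --raw '/readyz?verbose'
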
	I1213 09:07:54.538213    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:07:54.538213    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:07:54.583090    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:07:54.583090    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:07:57.312809    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:07:57.335927    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:07:57.368850    4604 logs.go:282] 0 containers: []
	W1213 09:07:57.368850    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:07:57.372314    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:07:57.414423    4604 logs.go:282] 0 containers: []
	W1213 09:07:57.414423    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:07:57.418091    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:07:57.445624    4604 logs.go:282] 0 containers: []
	W1213 09:07:57.445624    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:07:57.450351    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:07:57.478804    4604 logs.go:282] 0 containers: []
	W1213 09:07:57.478804    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:07:57.482347    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:07:57.515270    4604 logs.go:282] 0 containers: []
	W1213 09:07:57.515270    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:07:57.519226    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:07:57.550203    4604 logs.go:282] 0 containers: []
	W1213 09:07:57.550203    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:07:57.553796    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:07:57.581350    4604 logs.go:282] 0 containers: []
	W1213 09:07:57.581350    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:07:57.581350    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:07:57.581350    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:07:57.643200    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:07:57.643200    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:07:57.673988    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:07:57.673988    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:07:57.760392    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:07:57.746772   23846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:57.747806   23846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:57.748611   23846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:57.753804   23846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:57.755158   23846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:07:57.746772   23846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:57.747806   23846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:57.748611   23846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:57.753804   23846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:07:57.755158   23846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:07:57.760392    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:07:57.760392    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:07:57.802849    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:07:57.802849    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
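
Annotation: the cycle above is minikube's control-plane probe. For each expected component it lists containers whose name matches the k8s_<component> prefix that cri-dockerd applies, and every probe here returns "0 containers". Below is a minimal Go sketch of the same probe, assuming a local docker CLI rather than the SSH runner the log shows; the helper name listKubeContainers is illustrative, not minikube's API.

// Illustrative only: reproduces the probe the log shows minikube running
// over SSH, but against the local Docker CLI. The "k8s_" name prefix
// comes from cri-dockerd container naming, as seen in the log's filters.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeContainers returns the IDs of containers whose name matches
// k8s_<component>, mirroring the --filter/--format pair in the log.
func listKubeContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := listKubeContainers(c)
		if err != nil {
			fmt.Printf("probe %q failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	}
}
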
	I1213 09:08:00.359379    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:00.382695    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:00.413789    4604 logs.go:282] 0 containers: []
	W1213 09:08:00.413789    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:00.417939    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:00.446378    4604 logs.go:282] 0 containers: []
	W1213 09:08:00.446378    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:00.449613    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:00.482176    4604 logs.go:282] 0 containers: []
	W1213 09:08:00.482176    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:00.485918    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:00.515814    4604 logs.go:282] 0 containers: []
	W1213 09:08:00.515814    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:00.519425    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:00.550561    4604 logs.go:282] 0 containers: []
	W1213 09:08:00.550614    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:00.554312    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:00.581925    4604 logs.go:282] 0 containers: []
	W1213 09:08:00.582019    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:00.586945    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:00.614309    4604 logs.go:282] 0 containers: []
	W1213 09:08:00.614309    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:00.614309    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:00.614309    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:00.677303    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:00.677303    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:00.708357    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:00.708388    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:00.792820    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:00.783680   24008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:00.784993   24008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:00.786265   24008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:00.787013   24008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:00.789215   24008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:08:00.783680   24008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:00.784993   24008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:00.786265   24008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:00.787013   24008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:00.789215   24008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:08:00.792820    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:00.792820    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:00.834035    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:00.834035    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
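
Annotation: every "describe nodes" gather fails the same way: kubectl cannot even open a TCP connection to https://localhost:8441 (this profile's apiserver port), so the failure happens at connect time, before TLS or authentication. A minimal sketch of confirming that symptom, assuming it is run from inside the node; the two-second timeout is an arbitrary choice, not from the report.

// Hedged sketch: distinguish "nothing listening on the apiserver port"
// from a higher-level TLS/auth failure. A refused dial here matches the
// log's "dial tcp [::1]:8441: connect: connection refused".
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8441; the failure would be higher up the stack")
}
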
	I1213 09:08:03.387456    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:03.409689    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:03.440566    4604 logs.go:282] 0 containers: []
	W1213 09:08:03.440566    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:03.446132    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:03.481808    4604 logs.go:282] 0 containers: []
	W1213 09:08:03.481808    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:03.484917    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:03.516053    4604 logs.go:282] 0 containers: []
	W1213 09:08:03.516053    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:03.519249    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:03.549448    4604 logs.go:282] 0 containers: []
	W1213 09:08:03.549448    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:03.553206    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:03.580932    4604 logs.go:282] 0 containers: []
	W1213 09:08:03.580932    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:03.585400    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:03.615096    4604 logs.go:282] 0 containers: []
	W1213 09:08:03.615096    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:03.618691    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:03.650537    4604 logs.go:282] 0 containers: []
	W1213 09:08:03.650537    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:03.650537    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:03.650537    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:03.715560    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:03.715560    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:03.745557    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:03.745557    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:03.830341    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:03.818412   24157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:03.819378   24157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:03.820920   24157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:03.822091   24157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:03.823691   24157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:08:03.818412   24157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:03.819378   24157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:03.820920   24157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:03.822091   24157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:03.823691   24157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:08:03.830341    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:03.830341    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:03.873599    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:03.873599    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
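
Annotation: the "container status" gather encodes a fallback in shell: `which crictl || echo crictl` substitutes the crictl path when the binary exists (otherwise the bare word, whose invocation fails and lets `|| sudo docker ps -a` run instead). The same preference order, sketched in Go under the assumption of local execution; exec.LookPath stands in for the shell's which.

// Sketch of the fallback the gather step encodes in shell: prefer crictl
// when present on PATH, otherwise fall back to docker ps -a.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() ([]byte, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		return exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	}
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("status probe failed:", err)
	}
	fmt.Print(string(out))
}
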
	I1213 09:08:06.430406    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:06.454482    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:06.484232    4604 logs.go:282] 0 containers: []
	W1213 09:08:06.484232    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:06.489209    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:06.519685    4604 logs.go:282] 0 containers: []
	W1213 09:08:06.519685    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:06.523281    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:06.552228    4604 logs.go:282] 0 containers: []
	W1213 09:08:06.552228    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:06.556002    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:06.585247    4604 logs.go:282] 0 containers: []
	W1213 09:08:06.585301    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:06.588771    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:06.616709    4604 logs.go:282] 0 containers: []
	W1213 09:08:06.616709    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:06.622086    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:06.649957    4604 logs.go:282] 0 containers: []
	W1213 09:08:06.649957    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:06.653592    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:06.684273    4604 logs.go:282] 0 containers: []
	W1213 09:08:06.684273    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:06.684273    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:06.684273    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:06.712577    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:06.712577    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:06.795376    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:06.784575   24302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:06.785371   24302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:06.786679   24302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:06.787911   24302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:06.789050   24302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:08:06.784575   24302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:06.785371   24302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:06.786679   24302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:06.787911   24302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:06.789050   24302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:08:06.795376    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:06.795898    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:06.839065    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:06.839065    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:06.889079    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:06.889079    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:09.455581    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:09.480052    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:09.512625    4604 logs.go:282] 0 containers: []
	W1213 09:08:09.512625    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:09.516455    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:09.542431    4604 logs.go:282] 0 containers: []
	W1213 09:08:09.542499    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:09.547418    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:09.577381    4604 logs.go:282] 0 containers: []
	W1213 09:08:09.577381    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:09.581054    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:09.609734    4604 logs.go:282] 0 containers: []
	W1213 09:08:09.609809    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:09.614960    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:09.640858    4604 logs.go:282] 0 containers: []
	W1213 09:08:09.640858    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:09.644539    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:09.673297    4604 logs.go:282] 0 containers: []
	W1213 09:08:09.673324    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:09.676963    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:09.706066    4604 logs.go:282] 0 containers: []
	W1213 09:08:09.706097    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:09.706097    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:09.706097    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:09.770379    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:09.770379    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:09.800715    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:09.800715    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:09.888345    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:09.874561   24459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:09.876116   24459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:09.878447   24459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:09.880145   24459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:09.881085   24459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:08:09.874561   24459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:09.876116   24459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:09.878447   24459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:09.880145   24459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:09.881085   24459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:08:09.888366    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:09.888366    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:09.931503    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:09.931503    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:12.488194    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:12.511945    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:12.543092    4604 logs.go:282] 0 containers: []
	W1213 09:08:12.543092    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:12.546813    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:12.575244    4604 logs.go:282] 0 containers: []
	W1213 09:08:12.575244    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:12.579183    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:12.606211    4604 logs.go:282] 0 containers: []
	W1213 09:08:12.606211    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:12.609921    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:12.638793    4604 logs.go:282] 0 containers: []
	W1213 09:08:12.638793    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:12.642301    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:12.671214    4604 logs.go:282] 0 containers: []
	W1213 09:08:12.671250    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:12.675013    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:12.704218    4604 logs.go:282] 0 containers: []
	W1213 09:08:12.704218    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:12.708216    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:12.738811    4604 logs.go:282] 0 containers: []
	W1213 09:08:12.738811    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:12.738811    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:12.738811    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:12.801161    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:12.801161    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:12.830060    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:12.831060    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:12.915147    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:12.903878   24612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:12.904809   24612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:12.906430   24612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:12.907805   24612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:12.908973   24612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:08:12.903878   24612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:12.904809   24612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:12.906430   24612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:12.907805   24612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:12.908973   24612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:08:12.915147    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:12.915147    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:12.956625    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:12.956625    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:15.510904    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:15.533124    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:15.562214    4604 logs.go:282] 0 containers: []
	W1213 09:08:15.562214    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:15.565621    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:15.590955    4604 logs.go:282] 0 containers: []
	W1213 09:08:15.591009    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:15.594833    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:15.624408    4604 logs.go:282] 0 containers: []
	W1213 09:08:15.624408    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:15.628727    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:15.659837    4604 logs.go:282] 0 containers: []
	W1213 09:08:15.659837    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:15.663513    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:15.690393    4604 logs.go:282] 0 containers: []
	W1213 09:08:15.690393    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:15.693797    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:15.724206    4604 logs.go:282] 0 containers: []
	W1213 09:08:15.724206    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:15.730221    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:15.758038    4604 logs.go:282] 0 containers: []
	W1213 09:08:15.758038    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:15.758038    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:15.758038    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:15.820934    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:15.820934    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:15.851382    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:15.851382    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:15.931108    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:15.919902   24760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:15.921621   24760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:15.922751   24760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:15.924650   24760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:15.925746   24760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:08:15.919902   24760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:15.921621   24760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:15.922751   24760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:15.924650   24760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:15.925746   24760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:08:15.931108    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:15.931108    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:15.972073    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:15.972073    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:18.529296    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:18.551856    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:18.582603    4604 logs.go:282] 0 containers: []
	W1213 09:08:18.582603    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:18.586131    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:18.615914    4604 logs.go:282] 0 containers: []
	W1213 09:08:18.615914    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:18.619071    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:18.647226    4604 logs.go:282] 0 containers: []
	W1213 09:08:18.647314    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:18.650885    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:18.677834    4604 logs.go:282] 0 containers: []
	W1213 09:08:18.677834    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:18.681465    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:18.710780    4604 logs.go:282] 0 containers: []
	W1213 09:08:18.710819    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:18.715047    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:18.742085    4604 logs.go:282] 0 containers: []
	W1213 09:08:18.742085    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:18.746505    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:18.773319    4604 logs.go:282] 0 containers: []
	W1213 09:08:18.773319    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:18.773319    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:18.773374    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:18.837290    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:18.837290    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:18.866989    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:18.866989    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:18.948930    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:18.936159   24911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:18.939732   24911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:18.940602   24911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:18.942440   24911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:18.944294   24911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:08:18.936159   24911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:18.939732   24911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:18.940602   24911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:18.942440   24911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:18.944294   24911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:08:18.948930    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:18.948930    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:18.991657    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:18.991657    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:21.549759    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:21.572464    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:21.600790    4604 logs.go:282] 0 containers: []
	W1213 09:08:21.600818    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:21.604078    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:21.633799    4604 logs.go:282] 0 containers: []
	W1213 09:08:21.633799    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:21.637744    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:21.665485    4604 logs.go:282] 0 containers: []
	W1213 09:08:21.665485    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:21.669376    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:21.699844    4604 logs.go:282] 0 containers: []
	W1213 09:08:21.699844    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:21.706394    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:21.735819    4604 logs.go:282] 0 containers: []
	W1213 09:08:21.735819    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:21.738827    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:21.766879    4604 logs.go:282] 0 containers: []
	W1213 09:08:21.766879    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:21.770728    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:21.798832    4604 logs.go:282] 0 containers: []
	W1213 09:08:21.798867    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:21.798867    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:21.798867    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:21.863860    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:21.863860    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:21.896284    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:21.896284    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:21.976382    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:21.965807   25066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:21.966601   25066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:21.969521   25066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:21.971003   25066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:21.972104   25066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:08:21.965807   25066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:21.966601   25066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:21.969521   25066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:21.971003   25066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:21.972104   25066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:08:21.976382    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:21.976382    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:22.019285    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:22.019285    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:24.577418    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:24.603278    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:24.639919    4604 logs.go:282] 0 containers: []
	W1213 09:08:24.639919    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:24.643610    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:24.669667    4604 logs.go:282] 0 containers: []
	W1213 09:08:24.669690    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:24.672641    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:24.702942    4604 logs.go:282] 0 containers: []
	W1213 09:08:24.702995    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:24.706810    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:24.734192    4604 logs.go:282] 0 containers: []
	W1213 09:08:24.734192    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:24.737895    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:24.769567    4604 logs.go:282] 0 containers: []
	W1213 09:08:24.769597    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:24.773373    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:24.803190    4604 logs.go:282] 0 containers: []
	W1213 09:08:24.803190    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:24.807117    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:24.838064    4604 logs.go:282] 0 containers: []
	W1213 09:08:24.838064    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:24.838064    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:24.838138    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:24.901072    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:24.901072    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:24.931306    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:24.931306    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:25.017636    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:25.007253   25216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:25.008264   25216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:25.009244   25216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:25.011513   25216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:25.013011   25216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:08:25.007253   25216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:25.008264   25216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:25.009244   25216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:25.011513   25216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:25.013011   25216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:08:25.017636    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:25.017636    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:25.060810    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:25.060810    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:27.623166    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:27.647045    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:27.677340    4604 logs.go:282] 0 containers: []
	W1213 09:08:27.677340    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:27.680821    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:27.708576    4604 logs.go:282] 0 containers: []
	W1213 09:08:27.708576    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:27.712514    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:27.743161    4604 logs.go:282] 0 containers: []
	W1213 09:08:27.743161    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:27.746176    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:27.775854    4604 logs.go:282] 0 containers: []
	W1213 09:08:27.775854    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:27.779689    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:27.808373    4604 logs.go:282] 0 containers: []
	W1213 09:08:27.808373    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:27.814962    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:27.841903    4604 logs.go:282] 0 containers: []
	W1213 09:08:27.841903    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:27.847177    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:27.876941    4604 logs.go:282] 0 containers: []
	W1213 09:08:27.876941    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:27.876941    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:27.876941    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:27.937569    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:27.937569    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:27.967918    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:27.967918    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:28.051195    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:28.041767   25367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:28.043106   25367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:28.044618   25367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:28.045585   25367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:28.046883   25367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:28.051195    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:28.051195    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:28.091557    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:28.091557    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
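The cycle above repeats one liveness probe: look for a kube-apiserver process, then for its named container, then gather kubelet, dmesg, describe-nodes, Docker, and container-status logs. A minimal sketch of the same probe run by hand (assuming shell access to the node, e.g. via minikube ssh; the profile name is not shown in this excerpt):

    # Is a kube-apiserver process running at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

    # Does the runtime have a container for it? (empty output means none)
    docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}}'

    # Is anything listening on this profile's apiserver port (8441 here)?
    curl -sk https://localhost:8441/healthz || echo 'nothing listening on 8441'

Every iteration below reports "0 containers" for all seven component filters, so the gathered logs are the only diagnostic signal.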
	I1213 09:08:30.648207    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:30.671041    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:30.701387    4604 logs.go:282] 0 containers: []
	W1213 09:08:30.701387    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:30.705353    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:30.736395    4604 logs.go:282] 0 containers: []
	W1213 09:08:30.736395    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:30.740850    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:30.768626    4604 logs.go:282] 0 containers: []
	W1213 09:08:30.768704    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:30.772180    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:30.799431    4604 logs.go:282] 0 containers: []
	W1213 09:08:30.799504    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:30.803459    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:30.831305    4604 logs.go:282] 0 containers: []
	W1213 09:08:30.831305    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:30.835828    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:30.864498    4604 logs.go:282] 0 containers: []
	W1213 09:08:30.864498    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:30.868346    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:30.895559    4604 logs.go:282] 0 containers: []
	W1213 09:08:30.895559    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:30.895559    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:30.895559    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:30.960230    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:30.960230    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:30.989103    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:30.989103    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:31.064421    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:31.054673   25520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:31.055288   25520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:31.057455   25520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:31.058494   25520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:31.059785   25520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:31.064516    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:31.064547    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:31.104938    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:31.104938    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:33.662266    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:33.687669    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:33.719674    4604 logs.go:282] 0 containers: []
	W1213 09:08:33.719674    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:33.723494    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:33.753735    4604 logs.go:282] 0 containers: []
	W1213 09:08:33.753735    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:33.757660    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:33.785391    4604 logs.go:282] 0 containers: []
	W1213 09:08:33.785391    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:33.789471    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:33.817747    4604 logs.go:282] 0 containers: []
	W1213 09:08:33.817747    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:33.821119    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:33.849606    4604 logs.go:282] 0 containers: []
	W1213 09:08:33.849635    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:33.852624    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:33.883011    4604 logs.go:282] 0 containers: []
	W1213 09:08:33.883011    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:33.886617    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:33.914695    4604 logs.go:282] 0 containers: []
	W1213 09:08:33.914695    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:33.914695    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:33.914695    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:33.977929    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:33.977929    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:34.008197    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:34.008197    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:34.087742    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:34.077994   25669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:34.079234   25669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:34.080710   25669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:34.081989   25669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:34.083395   25669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:34.087742    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:34.087742    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:34.130894    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:34.130894    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:36.687878    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:36.710647    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:36.741923    4604 logs.go:282] 0 containers: []
	W1213 09:08:36.741956    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:36.745908    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:36.773011    4604 logs.go:282] 0 containers: []
	W1213 09:08:36.773011    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:36.777059    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:36.806949    4604 logs.go:282] 0 containers: []
	W1213 09:08:36.806949    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:36.811294    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:36.839274    4604 logs.go:282] 0 containers: []
	W1213 09:08:36.839274    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:36.843833    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:36.871615    4604 logs.go:282] 0 containers: []
	W1213 09:08:36.871615    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:36.875410    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:36.904496    4604 logs.go:282] 0 containers: []
	W1213 09:08:36.904496    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:36.908270    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:36.937747    4604 logs.go:282] 0 containers: []
	W1213 09:08:36.937747    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:36.937747    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:36.937747    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:37.017981    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:37.005392   25810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:37.010112   25810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:37.011449   25810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:37.012674   25810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:37.013720   25810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:37.017981    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:37.018025    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:37.058111    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:37.058111    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:37.112070    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:37.112070    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:37.178407    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:37.178407    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:39.714817    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:39.735622    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:39.767408    4604 logs.go:282] 0 containers: []
	W1213 09:08:39.767408    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:39.771362    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:39.800883    4604 logs.go:282] 0 containers: []
	W1213 09:08:39.800883    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:39.805233    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:39.833400    4604 logs.go:282] 0 containers: []
	W1213 09:08:39.833400    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:39.837009    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:39.864328    4604 logs.go:282] 0 containers: []
	W1213 09:08:39.864373    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:39.868165    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:39.895992    4604 logs.go:282] 0 containers: []
	W1213 09:08:39.895992    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:39.899539    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:39.926222    4604 logs.go:282] 0 containers: []
	W1213 09:08:39.926294    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:39.929312    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:39.957665    4604 logs.go:282] 0 containers: []
	W1213 09:08:39.957738    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:39.957738    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:39.957738    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:39.986966    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:39.986966    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:40.066305    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:40.055341   25967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:40.056045   25967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:40.058442   25967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:40.059663   25967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:40.060820   25967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:40.066357    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:40.066357    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:40.109785    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:40.109785    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:40.157108    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:40.157134    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:42.726706    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:42.752650    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:42.783377    4604 logs.go:282] 0 containers: []
	W1213 09:08:42.783401    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:42.786899    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:42.817139    4604 logs.go:282] 0 containers: []
	W1213 09:08:42.817212    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:42.820862    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:42.847197    4604 logs.go:282] 0 containers: []
	W1213 09:08:42.847268    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:42.850420    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:42.880094    4604 logs.go:282] 0 containers: []
	W1213 09:08:42.880094    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:42.884146    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:42.913168    4604 logs.go:282] 0 containers: []
	W1213 09:08:42.913168    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:42.916601    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:42.945059    4604 logs.go:282] 0 containers: []
	W1213 09:08:42.945059    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:42.950263    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:42.978582    4604 logs.go:282] 0 containers: []
	W1213 09:08:42.978603    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:42.978603    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:42.978603    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:43.041879    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:43.041879    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:43.072317    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:43.072317    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:43.165917    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:43.155759   26118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:43.156841   26118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:43.158782   26118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:43.160038   26118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:43.160953   26118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
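Each describe-nodes attempt fails identically: kubectl reads the server address from /var/lib/minikube/kubeconfig, resolves localhost:8441, and gets connection refused on [::1]:8441, which points at a missing listener rather than an auth or TLS problem. A sketch that separates the two causes (same assumptions as the probe sketch above):

    # --raw skips API discovery, so a refusal here is purely the listener
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig get --raw='/readyz?verbose'

    # Confirm from the socket side whether the port is bound at all
    sudo ss -ltnp | grep 8441 || echo 'port 8441 not bound'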
	I1213 09:08:43.165917    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:43.165917    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:43.207209    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:43.207209    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:45.761070    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:45.783759    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:45.815346    4604 logs.go:282] 0 containers: []
	W1213 09:08:45.815346    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:45.819219    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:45.846414    4604 logs.go:282] 0 containers: []
	W1213 09:08:45.846414    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:45.849850    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:45.881303    4604 logs.go:282] 0 containers: []
	W1213 09:08:45.881303    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:45.885203    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:45.911758    4604 logs.go:282] 0 containers: []
	W1213 09:08:45.911758    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:45.915687    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:45.946589    4604 logs.go:282] 0 containers: []
	W1213 09:08:45.946589    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:45.950051    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:45.976088    4604 logs.go:282] 0 containers: []
	W1213 09:08:45.976088    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:45.979669    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:46.011063    4604 logs.go:282] 0 containers: []
	W1213 09:08:46.011155    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:46.011155    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:46.011155    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:46.074019    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:46.075019    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:46.106619    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:46.106619    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:46.188897    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:46.178478   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:46.179482   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:46.180684   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:46.181950   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:46.183541   26268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:46.188897    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:46.188897    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:46.229995    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:46.229995    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:48.789468    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:48.811354    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:48.842470    4604 logs.go:282] 0 containers: []
	W1213 09:08:48.842470    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:48.848670    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:48.876329    4604 logs.go:282] 0 containers: []
	W1213 09:08:48.876329    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:48.879989    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:48.908565    4604 logs.go:282] 0 containers: []
	W1213 09:08:48.908565    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:48.912255    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:48.948072    4604 logs.go:282] 0 containers: []
	W1213 09:08:48.948072    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:48.951857    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:48.980030    4604 logs.go:282] 0 containers: []
	W1213 09:08:48.980030    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:48.983447    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:49.016239    4604 logs.go:282] 0 containers: []
	W1213 09:08:49.016239    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:49.022258    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:49.049950    4604 logs.go:282] 0 containers: []
	W1213 09:08:49.049950    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:49.049950    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:49.049950    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:49.094252    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:49.094252    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:49.146427    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:49.146952    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:49.205850    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:49.205850    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:49.235850    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:49.235850    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:49.315580    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:49.305530   26435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:49.308706   26435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:49.309996   26435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:49.311283   26435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:49.312405   26435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
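The pgrep timestamps advance by roughly three seconds per cycle (09:08:48.789 above, 09:08:51.820 below), i.e. the loop retries after a fixed short pause once each gather pass finishes. The cadence as a shell sketch only (minikube's actual wait logic is Go code and not part of this log):

    # poll for the apiserver process, pausing ~3s between attempts
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
        sleep 3
    done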
	I1213 09:08:51.820920    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:51.843200    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:51.874270    4604 logs.go:282] 0 containers: []
	W1213 09:08:51.874322    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:51.877687    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:51.905886    4604 logs.go:282] 0 containers: []
	W1213 09:08:51.905886    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:51.910483    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:51.937921    4604 logs.go:282] 0 containers: []
	W1213 09:08:51.938207    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:51.942126    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:51.970152    4604 logs.go:282] 0 containers: []
	W1213 09:08:51.970152    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:51.973777    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:52.005341    4604 logs.go:282] 0 containers: []
	W1213 09:08:52.005341    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:52.011533    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:52.042004    4604 logs.go:282] 0 containers: []
	W1213 09:08:52.042004    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:52.045665    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:52.073964    4604 logs.go:282] 0 containers: []
	W1213 09:08:52.073964    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:52.073964    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:52.073964    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:52.136324    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:52.137327    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:52.167493    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:52.167493    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:52.247700    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:52.239213   26566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:52.240590   26566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:52.241695   26566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:52.242537   26566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:52.243658   26566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:52.247700    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:52.247700    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:52.289002    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:52.289002    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:54.844809    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:54.866930    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:54.898229    4604 logs.go:282] 0 containers: []
	W1213 09:08:54.898229    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:54.902031    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:54.932712    4604 logs.go:282] 0 containers: []
	W1213 09:08:54.932712    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:54.936121    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:54.963632    4604 logs.go:282] 0 containers: []
	W1213 09:08:54.963632    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:54.967503    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:54.993576    4604 logs.go:282] 0 containers: []
	W1213 09:08:54.993576    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:54.997842    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:55.025663    4604 logs.go:282] 0 containers: []
	W1213 09:08:55.025663    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:55.029428    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:55.057141    4604 logs.go:282] 0 containers: []
	W1213 09:08:55.057141    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:55.061017    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:55.089820    4604 logs.go:282] 0 containers: []
	W1213 09:08:55.089820    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:55.089820    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:55.089820    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:55.153977    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:55.154001    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:55.215966    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:55.215966    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:55.244751    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:55.244751    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:55.322925    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:55.313352   26733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:55.314042   26733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:55.317002   26733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:55.318221   26733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:55.318785   26733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:08:55.322925    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:55.322925    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
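The container-status step is a shell fallback rather than a single command: prefer crictl when it resolves on PATH, otherwise fall back to plain docker ps. The same one-liner expanded with comments (a readability sketch using $() in place of the log's backticks, not minikube source):

    # command substitution: use crictl's path if found, else the bare name,
    # which then fails to run and triggers the || fallback to docker
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a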
	I1213 09:08:57.870018    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:08:57.892445    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:08:57.923189    4604 logs.go:282] 0 containers: []
	W1213 09:08:57.923189    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:08:57.926680    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:08:57.956979    4604 logs.go:282] 0 containers: []
	W1213 09:08:57.956979    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:08:57.960468    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:08:57.989714    4604 logs.go:282] 0 containers: []
	W1213 09:08:57.989714    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:08:57.994672    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:08:58.021349    4604 logs.go:282] 0 containers: []
	W1213 09:08:58.021349    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:08:58.024912    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:08:58.053594    4604 logs.go:282] 0 containers: []
	W1213 09:08:58.053594    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:08:58.057186    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:08:58.086247    4604 logs.go:282] 0 containers: []
	W1213 09:08:58.086247    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:08:58.089444    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:08:58.117375    4604 logs.go:282] 0 containers: []
	W1213 09:08:58.117375    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:08:58.117375    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:08:58.117375    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:08:58.159414    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:08:58.159414    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:08:58.213441    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:08:58.213441    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:08:58.275646    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:08:58.275646    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:08:58.307733    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:08:58.307733    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:08:58.393941    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:58.383096   26883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:58.384651   26883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:58.385333   26883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:58.388769   26883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:58.389485   26883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:08:58.383096   26883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:58.384651   26883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:58.385333   26883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:58.388769   26883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:58.389485   26883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
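(For hand-triage, a minimal sketch of the probes this retry loop issues is shown below. It assumes shell access to the test host; the profile name functional-000000 is hypothetical, and port 8441 is taken from the "connection refused" errors above.)

	# does any kube-apiserver container exist inside the node?
	minikube ssh -p functional-000000 -- sudo docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}'

	# probe the apiserver health endpoint from inside the node
	minikube ssh -p functional-000000 -- curl -sk https://localhost:8441/livez

	# if both come back empty/refused, the kubelet never started the
	# apiserver static pod; its journal usually says why
	minikube ssh -p functional-000000 -- sudo journalctl -u kubelet -n 100

(Checking both the container list and the health endpoint separates "apiserver never started" — the case in this log, where every docker ps filter returns 0 containers — from "started but unhealthy".)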
	[... the log-gathering cycle above repeats every ~3 seconds from 09:09:00 through 09:09:25 with identical results: pgrep finds no kube-apiserver process, each "docker ps -a --filter=name=k8s_<component>" check (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) returns 0 containers, and "kubectl describe nodes" keeps failing with "connection refused" against localhost:8441 ...]
	I1213 09:09:28.133991    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:28.156525    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:28.184733    4604 logs.go:282] 0 containers: []
	W1213 09:09:28.184733    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:28.188704    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:28.216710    4604 logs.go:282] 0 containers: []
	W1213 09:09:28.216710    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:28.220744    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:28.249082    4604 logs.go:282] 0 containers: []
	W1213 09:09:28.249082    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:28.252646    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:28.284289    4604 logs.go:282] 0 containers: []
	W1213 09:09:28.284289    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:28.288332    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:28.314796    4604 logs.go:282] 0 containers: []
	W1213 09:09:28.314796    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:28.321406    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:28.350295    4604 logs.go:282] 0 containers: []
	W1213 09:09:28.350295    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:28.353850    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:28.382048    4604 logs.go:282] 0 containers: []
	W1213 09:09:28.382048    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:28.382048    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:28.382048    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:28.444457    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:28.444457    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:28.475310    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:28.475337    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:28.562628    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:28.551828   28371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:28.553431   28371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:28.555792   28371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:28.558400   28371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:28.559403   28371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:28.562628    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:28.562628    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:28.605307    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:28.605307    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
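The container checks above shell out to docker with a name filter and an ID-only format, and logs.go:282 counts the returned IDs. A hedged sketch of that check, using only the docker command line shown in the log:

	// hypothetical re-run of the container check in the log:
	// docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name="+name, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := containerIDs("k8s_kube-apiserver")
		if err != nil {
			fmt.Println("docker ps failed:", err)
			return
		}
		// The log reports "0 containers: []" for every control-plane component.
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}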
	I1213 09:09:31.165266    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:31.186966    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:31.222005    4604 logs.go:282] 0 containers: []
	W1213 09:09:31.222066    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:31.225186    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:31.256308    4604 logs.go:282] 0 containers: []
	W1213 09:09:31.256308    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:31.260088    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:31.287293    4604 logs.go:282] 0 containers: []
	W1213 09:09:31.287293    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:31.290982    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:31.319241    4604 logs.go:282] 0 containers: []
	W1213 09:09:31.319241    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:31.322581    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:31.350058    4604 logs.go:282] 0 containers: []
	W1213 09:09:31.350128    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:31.353584    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:31.380173    4604 logs.go:282] 0 containers: []
	W1213 09:09:31.380212    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:31.384070    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:31.411239    4604 logs.go:282] 0 containers: []
	W1213 09:09:31.411239    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:31.411239    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:31.411239    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:31.477283    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:31.477283    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:31.507500    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:31.508020    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:31.597314    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:31.584543   28527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:31.585344   28527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:31.588383   28527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:31.589783   28527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:31.590653   28527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:31.597314    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:31.597314    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:31.635938    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:31.635938    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:34.189996    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:34.212398    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:34.238809    4604 logs.go:282] 0 containers: []
	W1213 09:09:34.238809    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:34.242256    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:34.270112    4604 logs.go:282] 0 containers: []
	W1213 09:09:34.270112    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:34.273875    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:34.303456    4604 logs.go:282] 0 containers: []
	W1213 09:09:34.303456    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:34.307522    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:34.338016    4604 logs.go:282] 0 containers: []
	W1213 09:09:34.338016    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:34.341872    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:34.368952    4604 logs.go:282] 0 containers: []
	W1213 09:09:34.368952    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:34.374198    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:34.405261    4604 logs.go:282] 0 containers: []
	W1213 09:09:34.405261    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:34.408381    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:34.435072    4604 logs.go:282] 0 containers: []
	W1213 09:09:34.435072    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:34.435072    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:34.435072    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:34.515381    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:34.502247   28663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:34.503068   28663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:34.508040   28663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:34.508918   28663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:34.510099   28663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:34.515381    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:34.515381    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:34.573241    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:34.573241    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:34.623650    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:34.624178    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:34.682935    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:34.682935    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
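The timestamps (09:09:28, :31, :34, ...) show the whole gather-and-check cycle repeating on roughly a three-second interval. A hypothetical poll-until-ready loop with that cadence; the interval and timeout here are illustrative, not minikube's actual values:

	// hypothetical sketch of the retry cadence visible in the timestamps
	package main

	import (
		"errors"
		"fmt"
		"net"
		"time"
	)

	func waitForAPIServer(addr string, interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, interval)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(interval) // checks repeat roughly every 3s in the log
		}
		return errors.New("timed out waiting for " + addr)
	}

	func main() {
		if err := waitForAPIServer("localhost:8441", 3*time.Second, 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}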
	I1213 09:09:37.219569    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:37.242545    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:37.272082    4604 logs.go:282] 0 containers: []
	W1213 09:09:37.272082    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:37.275835    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:37.304181    4604 logs.go:282] 0 containers: []
	W1213 09:09:37.304181    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:37.307884    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:37.335943    4604 logs.go:282] 0 containers: []
	W1213 09:09:37.335943    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:37.339864    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:37.377566    4604 logs.go:282] 0 containers: []
	W1213 09:09:37.377566    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:37.382018    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:37.412404    4604 logs.go:282] 0 containers: []
	W1213 09:09:37.412404    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:37.416038    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:37.442722    4604 logs.go:282] 0 containers: []
	W1213 09:09:37.442722    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:37.446771    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:37.474398    4604 logs.go:282] 0 containers: []
	W1213 09:09:37.474398    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:37.474398    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:37.474398    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:37.577898    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:37.567137   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:37.567518   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:37.570136   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:37.571337   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:37.572686   28821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:37.577898    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:37.577898    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:37.620560    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:37.620560    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:37.669632    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:37.669632    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:37.734142    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:37.734142    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:40.271884    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:40.294824    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:40.321888    4604 logs.go:282] 0 containers: []
	W1213 09:09:40.321888    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:40.325505    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:40.353723    4604 logs.go:282] 0 containers: []
	W1213 09:09:40.353808    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:40.357193    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:40.386522    4604 logs.go:282] 0 containers: []
	W1213 09:09:40.386522    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:40.391186    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:40.418547    4604 logs.go:282] 0 containers: []
	W1213 09:09:40.418547    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:40.425278    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:40.455783    4604 logs.go:282] 0 containers: []
	W1213 09:09:40.455783    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:40.459890    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:40.489966    4604 logs.go:282] 0 containers: []
	W1213 09:09:40.489966    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:40.493703    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:40.538181    4604 logs.go:282] 0 containers: []
	W1213 09:09:40.538181    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:40.538253    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:40.538253    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:40.601826    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:40.601826    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:40.631898    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:40.631898    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:40.713071    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:40.701224   28980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:40.701842   28980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:40.706275   28980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:40.707428   28980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:40.708512   28980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:40.713071    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:40.713071    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:40.755270    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:40.755270    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:43.309018    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:43.331107    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:43.365765    4604 logs.go:282] 0 containers: []
	W1213 09:09:43.365765    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:43.369683    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:43.396582    4604 logs.go:282] 0 containers: []
	W1213 09:09:43.396582    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:43.400512    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:43.429185    4604 logs.go:282] 0 containers: []
	W1213 09:09:43.429185    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:43.432708    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:43.463128    4604 logs.go:282] 0 containers: []
	W1213 09:09:43.463128    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:43.466133    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:43.496082    4604 logs.go:282] 0 containers: []
	W1213 09:09:43.496082    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:43.500151    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:43.537578    4604 logs.go:282] 0 containers: []
	W1213 09:09:43.537578    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:43.541441    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:43.569477    4604 logs.go:282] 0 containers: []
	W1213 09:09:43.569477    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:43.569477    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:43.569521    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:43.620575    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:43.620575    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:43.681515    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:43.681515    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:43.710447    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:43.710447    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:43.793119    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:43.783406   29143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:43.784625   29143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:43.785703   29143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:43.786648   29143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:43.787996   29143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:43.793119    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:43.793119    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
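Each cycle's describe-nodes step runs the pinned kubectl binary against the node-local kubeconfig and exits with status 1; the stderr lines are what logs.go:130 reprints. A hedged reproduction of that single step (command and paths copied from the log; meant to run inside the node):

	// hypothetical reproduction of the failing "describe nodes" step
	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig")
		var stdout, stderr bytes.Buffer
		cmd.Stdout = &stdout
		cmd.Stderr = &stderr
		if err := cmd.Run(); err != nil {
			// Mirrors logs.go:130: exit status 1, with the memcache.go
			// connection-refused lines arriving on stderr.
			fmt.Printf("describe nodes failed: %v\nstderr:\n%s", err, stderr.String())
			return
		}
		fmt.Print(stdout.String())
	}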
	I1213 09:09:46.339779    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:46.362296    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:46.391878    4604 logs.go:282] 0 containers: []
	W1213 09:09:46.391878    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:46.395830    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:46.424203    4604 logs.go:282] 0 containers: []
	W1213 09:09:46.424203    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:46.427838    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:46.456024    4604 logs.go:282] 0 containers: []
	W1213 09:09:46.456024    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:46.460057    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:46.488187    4604 logs.go:282] 0 containers: []
	W1213 09:09:46.488187    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:46.493831    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:46.533872    4604 logs.go:282] 0 containers: []
	W1213 09:09:46.533872    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:46.540390    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:46.568011    4604 logs.go:282] 0 containers: []
	W1213 09:09:46.568011    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:46.571702    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:46.602586    4604 logs.go:282] 0 containers: []
	W1213 09:09:46.602653    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:46.602653    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:46.602653    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:46.662280    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:46.662280    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:46.693557    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:46.693557    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:46.782210    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:46.770755   29279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:46.771672   29279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:46.774093   29279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:46.774970   29279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:46.777140   29279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:46.782210    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:46.782210    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:46.823701    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:46.823701    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:49.384298    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:49.407707    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:49.438420    4604 logs.go:282] 0 containers: []
	W1213 09:09:49.438420    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:49.442231    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:49.470770    4604 logs.go:282] 0 containers: []
	W1213 09:09:49.470770    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:49.473919    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:49.504515    4604 logs.go:282] 0 containers: []
	W1213 09:09:49.504546    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:49.508487    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:49.547082    4604 logs.go:282] 0 containers: []
	W1213 09:09:49.547082    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:49.551548    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:49.578796    4604 logs.go:282] 0 containers: []
	W1213 09:09:49.578796    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:49.582281    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:49.608530    4604 logs.go:282] 0 containers: []
	W1213 09:09:49.608530    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:49.611741    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:49.639231    4604 logs.go:282] 0 containers: []
	W1213 09:09:49.639231    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:49.639231    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:49.639231    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:49.689389    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:49.689389    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:49.753229    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:49.753229    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:49.783294    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:49.783294    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:49.864270    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:49.854364   29444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:49.855305   29444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:49.858106   29444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:49.859177   29444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:49.860391   29444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:49.864270    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:49.864270    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:52.412975    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:52.439979    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:52.475193    4604 logs.go:282] 0 containers: []
	W1213 09:09:52.475193    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:52.479114    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:52.510741    4604 logs.go:282] 0 containers: []
	W1213 09:09:52.510741    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:52.514487    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:52.557360    4604 logs.go:282] 0 containers: []
	W1213 09:09:52.557360    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:52.561448    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:52.588077    4604 logs.go:282] 0 containers: []
	W1213 09:09:52.588077    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:52.591539    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:52.621182    4604 logs.go:282] 0 containers: []
	W1213 09:09:52.621182    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:52.624734    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:52.650838    4604 logs.go:282] 0 containers: []
	W1213 09:09:52.650838    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:52.655565    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:52.686451    4604 logs.go:282] 0 containers: []
	W1213 09:09:52.686451    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:52.686451    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:52.686528    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:52.747788    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:52.747788    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:52.781834    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:52.782825    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:52.860287    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:52.851144   29582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:52.852167   29582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:52.853303   29582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:52.854413   29582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:52.855634   29582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:52.860362    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:52.860362    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:52.905051    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:52.905051    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
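Besides describe nodes, each cycle gathers the same four sources: the kubelet journal, dmesg, the Docker/cri-docker journal, and container status. A hypothetical helper that replays exactly those shell commands (strings copied from the log; to be run inside the minikube node):

	// hypothetical log gatherer mirroring the four sources in the loop above
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		sources := map[string]string{
			"kubelet":          "sudo journalctl -u kubelet -n 400",
			"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
			"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}
		for name, cmd := range sources {
			fmt.Println("Gathering logs for", name, "...")
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				fmt.Println("failed:", err)
			}
			fmt.Print(string(out))
		}
	}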
	I1213 09:09:55.461925    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:55.484035    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:55.517116    4604 logs.go:282] 0 containers: []
	W1213 09:09:55.517116    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:55.522844    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:55.553488    4604 logs.go:282] 0 containers: []
	W1213 09:09:55.553488    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:55.557370    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:55.589995    4604 logs.go:282] 0 containers: []
	W1213 09:09:55.589995    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:55.595259    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:55.622638    4604 logs.go:282] 0 containers: []
	W1213 09:09:55.622707    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:55.626066    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:55.652752    4604 logs.go:282] 0 containers: []
	W1213 09:09:55.652752    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:55.657065    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:55.685386    4604 logs.go:282] 0 containers: []
	W1213 09:09:55.685407    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:55.689428    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:55.717051    4604 logs.go:282] 0 containers: []
	W1213 09:09:55.717051    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:55.717051    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:55.717120    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:55.758337    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:55.758337    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:09:55.822375    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:55.822375    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:55.885080    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:55.885080    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:09:55.917741    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:55.917741    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:55.995300    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:55.984347   29760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:55.985357   29760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:55.985896   29760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:55.988781   29760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:55.989544   29760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:09:58.500574    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:09:58.521337    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:09:58.548629    4604 logs.go:282] 0 containers: []
	W1213 09:09:58.548629    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:09:58.551546    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:09:58.581100    4604 logs.go:282] 0 containers: []
	W1213 09:09:58.581100    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:09:58.586220    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:09:58.613906    4604 logs.go:282] 0 containers: []
	W1213 09:09:58.613906    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:09:58.617469    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:09:58.644238    4604 logs.go:282] 0 containers: []
	W1213 09:09:58.644292    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:09:58.648344    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:09:58.678031    4604 logs.go:282] 0 containers: []
	W1213 09:09:58.678031    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:09:58.681474    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:09:58.707025    4604 logs.go:282] 0 containers: []
	W1213 09:09:58.707025    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:09:58.710542    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:09:58.742746    4604 logs.go:282] 0 containers: []
	W1213 09:09:58.742770    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:09:58.742770    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:09:58.742770    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:09:58.805849    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:09:58.805849    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
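The kubelet and kernel logs are collected with bounded commands so a wedged node cannot flood the report: only the last 400 journal lines for the kubelet unit, and only kernel messages at warning severity or worse. Run by hand inside the node, the equivalent is (a sketch; --no-pager added for interactive use):

    sudo journalctl -u kubelet -n 400 --no-pager
    # -P disables the pager (-H alone would page), -H gives human-readable
    # timestamps, -L=never disables color, and --level keeps only
    # warning-or-worse messages.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400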
	I1213 09:09:58.837389    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:09:58.837389    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:09:58.917868    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:09:58.906816   29896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:58.907692   29896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:58.911842   29896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:58.913304   29896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:58.914304   29896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:09:58.906816   29896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:58.907692   29896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:58.911842   29896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:58.913304   29896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:09:58.914304   29896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:09:58.917899    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:09:58.917899    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:09:58.959951    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:09:58.959951    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
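Each retry cycle above repeats the same probe sequence: pgrep for a kube-apiserver process, then one docker ps query per control-plane component. With the Docker runtime, cri-dockerd names kubelet-created containers k8s_<container>_<pod>_<namespace>_..., so filtering on the k8s_ prefix and getting an empty ID list means the component's container was never created, not merely that it exited. The per-component check can be reproduced in one loop (a sketch, run inside the node):

    # Hedged sketch of the same probe the log performs component by component.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
      echo "${c}: ${ids:-none}"
    done

In this run every component comes back empty ("0 containers"), so the gathered kubelet and Docker journals are the only places left to look for why the static pods never started.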
	I1213 09:10:01.514466    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:01.535932    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:01.567037    4604 logs.go:282] 0 containers: []
	W1213 09:10:01.567037    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:01.571145    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:01.595775    4604 logs.go:282] 0 containers: []
	W1213 09:10:01.595775    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:01.599771    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:01.629170    4604 logs.go:282] 0 containers: []
	W1213 09:10:01.629170    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:01.632128    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:01.662382    4604 logs.go:282] 0 containers: []
	W1213 09:10:01.662382    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:01.665517    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:01.693368    4604 logs.go:282] 0 containers: []
	W1213 09:10:01.693368    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:01.696830    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:01.724611    4604 logs.go:282] 0 containers: []
	W1213 09:10:01.724611    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:01.728207    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:01.755432    4604 logs.go:282] 0 containers: []
	W1213 09:10:01.755432    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:01.755432    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:01.755432    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:01.821399    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:01.821399    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:01.852579    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:01.853099    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:01.934160    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:01.923250   30043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:01.924109   30043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:01.926861   30043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:01.928007   30043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:01.929279   30043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:01.923250   30043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:01.924109   30043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:01.926861   30043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:01.928007   30043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:01.929279   30043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:01.934160    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:01.934160    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:01.976648    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:01.976648    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
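The container-status step uses a small fallback chain so it works whether or not crictl is installed: the backtick substitution resolves crictl's path if present; if which finds nothing it echoes the bare name, that invocation fails, and the || falls through to plain docker ps. Spelled out, it is roughly equivalent to (the original one-liner also falls back if crictl itself exits nonzero):

    # Fallback made explicit: prefer crictl when available, else docker.
    if command -v crictl >/dev/null 2>&1; then
      sudo "$(command -v crictl)" ps -a
    else
      sudo docker ps -a
    fi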
	I1213 09:10:04.534486    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:04.556301    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:04.587516    4604 logs.go:282] 0 containers: []
	W1213 09:10:04.587516    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:04.591921    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:04.621299    4604 logs.go:282] 0 containers: []
	W1213 09:10:04.621371    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:04.625334    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:04.653954    4604 logs.go:282] 0 containers: []
	W1213 09:10:04.653954    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:04.657436    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:04.686845    4604 logs.go:282] 0 containers: []
	W1213 09:10:04.686845    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:04.690201    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:04.718702    4604 logs.go:282] 0 containers: []
	W1213 09:10:04.718702    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:04.722366    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:04.750970    4604 logs.go:282] 0 containers: []
	W1213 09:10:04.750970    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:04.754283    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:04.783682    4604 logs.go:282] 0 containers: []
	W1213 09:10:04.783682    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:04.783682    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:04.783682    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:04.844699    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:04.844699    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:04.875813    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:04.875813    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:04.953200    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:04.941991   30194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:04.942942   30194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:04.946691   30194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:04.947838   30194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:04.948867   30194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:04.941991   30194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:04.942942   30194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:04.946691   30194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:04.947838   30194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:04.948867   30194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:04.953200    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:04.953200    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:04.993306    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:04.993306    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:07.543188    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:07.566411    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:07.596022    4604 logs.go:282] 0 containers: []
	W1213 09:10:07.596022    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:07.599737    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:07.627899    4604 logs.go:282] 0 containers: []
	W1213 09:10:07.627899    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:07.631860    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:07.661281    4604 logs.go:282] 0 containers: []
	W1213 09:10:07.661281    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:07.665185    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:07.695914    4604 logs.go:282] 0 containers: []
	W1213 09:10:07.695914    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:07.699555    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:07.732011    4604 logs.go:282] 0 containers: []
	W1213 09:10:07.732058    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:07.736521    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:07.769602    4604 logs.go:282] 0 containers: []
	W1213 09:10:07.769602    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:07.773486    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:07.802107    4604 logs.go:282] 0 containers: []
	W1213 09:10:07.802107    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:07.802107    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:07.802107    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:07.864516    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:07.864516    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:07.896513    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:07.896513    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:07.973085    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:07.961966   30341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:07.962932   30341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:07.964132   30341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:07.966225   30341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:07.967235   30341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:07.961966   30341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:07.962932   30341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:07.964132   30341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:07.966225   30341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:07.967235   30341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:07.973085    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:07.973085    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:08.014869    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:08.014869    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:10.570544    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:10.592396    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:10.624974    4604 logs.go:282] 0 containers: []
	W1213 09:10:10.624974    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:10.629502    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:10.657201    4604 logs.go:282] 0 containers: []
	W1213 09:10:10.657201    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:10.660591    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:10.687563    4604 logs.go:282] 0 containers: []
	W1213 09:10:10.687563    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:10.691289    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:10.721420    4604 logs.go:282] 0 containers: []
	W1213 09:10:10.721420    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:10.724919    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:10.752211    4604 logs.go:282] 0 containers: []
	W1213 09:10:10.752211    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:10.755905    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:10.784215    4604 logs.go:282] 0 containers: []
	W1213 09:10:10.784215    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:10.788207    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:10.816951    4604 logs.go:282] 0 containers: []
	W1213 09:10:10.816951    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:10.816951    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:10.816951    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:10.879172    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:10.879172    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:10.908202    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:10.908202    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:10.986325    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:10.976268   30491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:10.977455   30491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:10.978475   30491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:10.979601   30491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:10.980602   30491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:10.976268   30491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:10.977455   30491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:10.978475   30491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:10.979601   30491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:10.980602   30491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:10.986325    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:10.986325    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:11.027515    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:11.027515    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:13.588427    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:13.611368    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:13.644873    4604 logs.go:282] 0 containers: []
	W1213 09:10:13.644873    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:13.648808    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:13.677881    4604 logs.go:282] 0 containers: []
	W1213 09:10:13.677942    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:13.682617    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:13.712870    4604 logs.go:282] 0 containers: []
	W1213 09:10:13.712870    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:13.716696    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:13.744007    4604 logs.go:282] 0 containers: []
	W1213 09:10:13.744007    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:13.748548    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:13.777967    4604 logs.go:282] 0 containers: []
	W1213 09:10:13.778011    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:13.781321    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:13.809271    4604 logs.go:282] 0 containers: []
	W1213 09:10:13.809271    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:13.813285    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:13.840555    4604 logs.go:282] 0 containers: []
	W1213 09:10:13.840555    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:13.840555    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:13.840555    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:13.904251    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:13.904251    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:13.935133    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:13.935133    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:14.016449    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:14.005177   30636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:14.005946   30636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:14.009264   30636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:14.010040   30636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:14.012104   30636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:14.005177   30636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:14.005946   30636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:14.009264   30636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:14.010040   30636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:14.012104   30636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:14.016449    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:14.016449    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:14.057706    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:14.057706    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:16.615756    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:16.638088    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:16.670041    4604 logs.go:282] 0 containers: []
	W1213 09:10:16.670041    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:16.673924    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:16.704163    4604 logs.go:282] 0 containers: []
	W1213 09:10:16.704163    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:16.710097    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:16.740700    4604 logs.go:282] 0 containers: []
	W1213 09:10:16.740700    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:16.744219    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:16.771219    4604 logs.go:282] 0 containers: []
	W1213 09:10:16.771219    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:16.774904    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:16.804658    4604 logs.go:282] 0 containers: []
	W1213 09:10:16.804658    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:16.808110    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:16.837026    4604 logs.go:282] 0 containers: []
	W1213 09:10:16.837026    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:16.840957    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:16.869149    4604 logs.go:282] 0 containers: []
	W1213 09:10:16.869149    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:16.869149    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:16.869149    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:16.933545    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:16.933545    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:16.964296    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:16.964296    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:17.040603    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:17.030769   30785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:17.031886   30785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:17.032780   30785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:17.035115   30785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:17.036189   30785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:17.030769   30785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:17.031886   30785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:17.032780   30785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:17.035115   30785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:17.036189   30785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:17.040603    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:17.040603    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:17.083647    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:17.083647    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:19.650764    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:19.674143    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:19.702643    4604 logs.go:282] 0 containers: []
	W1213 09:10:19.702643    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:19.707045    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:19.734166    4604 logs.go:282] 0 containers: []
	W1213 09:10:19.734166    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:19.738121    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:19.767856    4604 logs.go:282] 0 containers: []
	W1213 09:10:19.767856    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:19.771207    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:19.801742    4604 logs.go:282] 0 containers: []
	W1213 09:10:19.801819    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:19.805222    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:19.833321    4604 logs.go:282] 0 containers: []
	W1213 09:10:19.833321    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:19.836856    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:19.863434    4604 logs.go:282] 0 containers: []
	W1213 09:10:19.863465    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:19.867234    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:19.897054    4604 logs.go:282] 0 containers: []
	W1213 09:10:19.897054    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:19.897054    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:19.897054    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:19.946805    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:19.946805    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:20.007213    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:20.007213    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:20.036248    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:20.036248    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:20.114272    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:20.104527   30950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:20.106024   30950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:20.107052   30950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:20.108958   30950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:20.109919   30950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:20.104527   30950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:20.106024   30950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:20.107052   30950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:20.108958   30950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:20.109919   30950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:20.114272    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:20.114272    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:22.659210    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:22.681874    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:22.711856    4604 logs.go:282] 0 containers: []
	W1213 09:10:22.711856    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:22.715662    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:22.744003    4604 logs.go:282] 0 containers: []
	W1213 09:10:22.744003    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:22.748080    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:22.778409    4604 logs.go:282] 0 containers: []
	W1213 09:10:22.778409    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:22.781997    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:22.809533    4604 logs.go:282] 0 containers: []
	W1213 09:10:22.809557    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:22.812700    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:22.842593    4604 logs.go:282] 0 containers: []
	W1213 09:10:22.842593    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:22.846788    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:22.874683    4604 logs.go:282] 0 containers: []
	W1213 09:10:22.874683    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:22.878045    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:22.906027    4604 logs.go:282] 0 containers: []
	W1213 09:10:22.906027    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:22.906088    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:22.906107    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:22.970513    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:22.970513    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:23.000755    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:23.000755    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:23.084733    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:23.075283   31086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:23.076072   31086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:23.077826   31086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:23.078971   31086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:23.080011   31086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:23.075283   31086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:23.076072   31086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:23.077826   31086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:23.078971   31086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:23.080011   31086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:23.084733    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:23.084733    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:23.127257    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:23.127257    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:25.686782    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:25.709380    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:25.738484    4604 logs.go:282] 0 containers: []
	W1213 09:10:25.738484    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:25.742065    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:25.770152    4604 logs.go:282] 0 containers: []
	W1213 09:10:25.770152    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:25.774113    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:25.803290    4604 logs.go:282] 0 containers: []
	W1213 09:10:25.803290    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:25.807361    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:25.834734    4604 logs.go:282] 0 containers: []
	W1213 09:10:25.834734    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:25.838734    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:25.865666    4604 logs.go:282] 0 containers: []
	W1213 09:10:25.865666    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:25.869046    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:25.896838    4604 logs.go:282] 0 containers: []
	W1213 09:10:25.896838    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:25.900312    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:25.930732    4604 logs.go:282] 0 containers: []
	W1213 09:10:25.930732    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:25.930732    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:25.930732    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:25.980958    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:25.980958    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:26.041855    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:26.041855    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:26.073493    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:26.073493    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:26.159584    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:26.149576   31246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:26.150693   31246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:26.151667   31246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:26.154327   31246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:26.156130   31246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:10:26.149576   31246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:26.150693   31246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:26.151667   31246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:26.154327   31246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:26.156130   31246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:10:26.159584    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:26.159584    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:28.707550    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
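Each retry cycle starts with this pgrep probe, which decides whether an apiserver process exists at all before the per-component docker ps filters run. The flags: -x requires an exact match, -n returns only the newest matching process, and -f matches against the full command line, so the regex covers both the binary name and its arguments:

    # The same probe, quoted for interactive use (pattern from the log).
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'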
	I1213 09:10:28.729858    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:28.759846    4604 logs.go:282] 0 containers: []
	W1213 09:10:28.759846    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:28.763596    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:28.794012    4604 logs.go:282] 0 containers: []
	W1213 09:10:28.794012    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:28.797789    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:28.826515    4604 logs.go:282] 0 containers: []
	W1213 09:10:28.826515    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:28.829640    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:28.861520    4604 logs.go:282] 0 containers: []
	W1213 09:10:28.861520    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:28.864944    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:28.893275    4604 logs.go:282] 0 containers: []
	W1213 09:10:28.893303    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:28.896907    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:28.923381    4604 logs.go:282] 0 containers: []
	W1213 09:10:28.923381    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:28.928293    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:28.960491    4604 logs.go:282] 0 containers: []
	W1213 09:10:28.960491    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:28.960491    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:28.960491    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:29.022787    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:29.022787    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:29.053784    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:29.053784    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:29.136856    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:29.125258   31380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:29.127477   31380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:29.129454   31380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:29.131359   31380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:29.132312   31380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:10:29.136898    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:29.136898    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:29.179176    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:29.179176    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:31.733518    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:31.756802    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:31.790216    4604 logs.go:282] 0 containers: []
	W1213 09:10:31.790216    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:31.793805    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:31.824397    4604 logs.go:282] 0 containers: []
	W1213 09:10:31.824397    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:31.829526    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:31.857889    4604 logs.go:282] 0 containers: []
	W1213 09:10:31.857889    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:31.861193    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:31.890304    4604 logs.go:282] 0 containers: []
	W1213 09:10:31.890304    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:31.893795    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:31.921856    4604 logs.go:282] 0 containers: []
	W1213 09:10:31.921927    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:31.924962    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:31.953806    4604 logs.go:282] 0 containers: []
	W1213 09:10:31.953837    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:31.957466    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:31.987829    4604 logs.go:282] 0 containers: []
	W1213 09:10:31.987829    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:31.987829    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:31.987829    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:32.034063    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:32.034063    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:32.096079    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:32.096079    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:32.126955    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:32.126955    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:32.209100    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:32.196897   31542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:32.197915   31542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:32.198712   31542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:32.202032   31542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:32.203735   31542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:10:32.209100    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:32.209100    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:34.755896    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:34.779017    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:34.808294    4604 logs.go:282] 0 containers: []
	W1213 09:10:34.808366    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:34.811869    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:34.839872    4604 logs.go:282] 0 containers: []
	W1213 09:10:34.839938    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:34.843685    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:34.871636    4604 logs.go:282] 0 containers: []
	W1213 09:10:34.871636    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:34.875660    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:34.903443    4604 logs.go:282] 0 containers: []
	W1213 09:10:34.903443    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:34.907770    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:34.935581    4604 logs.go:282] 0 containers: []
	W1213 09:10:34.935581    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:34.939767    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:34.969814    4604 logs.go:282] 0 containers: []
	W1213 09:10:34.969814    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:34.973317    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:35.003474    4604 logs.go:282] 0 containers: []
	W1213 09:10:35.003474    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:35.003474    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:35.003537    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:35.066261    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:35.066261    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:35.097692    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:35.097692    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:35.180207    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:35.168999   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:35.170587   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:35.172028   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:35.173692   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:35.175343   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:10:35.180207    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:35.180207    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:35.223159    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:35.223159    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:37.780314    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:37.804001    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:37.835430    4604 logs.go:282] 0 containers: []
	W1213 09:10:37.835430    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:37.839042    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:37.867680    4604 logs.go:282] 0 containers: []
	W1213 09:10:37.867699    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:37.870898    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:37.902798    4604 logs.go:282] 0 containers: []
	W1213 09:10:37.902798    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:37.906542    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:37.934985    4604 logs.go:282] 0 containers: []
	W1213 09:10:37.935050    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:37.938192    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:37.969111    4604 logs.go:282] 0 containers: []
	W1213 09:10:37.969111    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:37.972848    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:38.002751    4604 logs.go:282] 0 containers: []
	W1213 09:10:38.002751    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:38.006552    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:38.035033    4604 logs.go:282] 0 containers: []
	W1213 09:10:38.035033    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:38.035033    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:38.035033    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:38.086087    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:38.086611    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:38.147832    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:38.147832    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:38.180233    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:38.180233    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:38.261008    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:38.249120   31840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:38.250220   31840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:38.251345   31840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:38.252453   31840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:38.253654   31840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:10:38.261008    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:38.261008    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:40.811191    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:40.833394    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:40.865083    4604 logs.go:282] 0 containers: []
	W1213 09:10:40.865083    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:40.868858    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:40.900204    4604 logs.go:282] 0 containers: []
	W1213 09:10:40.900204    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:40.903500    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:40.930103    4604 logs.go:282] 0 containers: []
	W1213 09:10:40.930103    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:40.933495    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:40.960744    4604 logs.go:282] 0 containers: []
	W1213 09:10:40.960744    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:40.964475    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:40.990935    4604 logs.go:282] 0 containers: []
	W1213 09:10:40.990935    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:40.995048    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:41.022706    4604 logs.go:282] 0 containers: []
	W1213 09:10:41.022706    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:41.026451    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:41.056906    4604 logs.go:282] 0 containers: []
	W1213 09:10:41.056906    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:41.056906    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:41.056906    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:41.115470    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:41.115470    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:41.143967    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:41.143967    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:41.232682    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:41.221185   31975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:41.222351   31975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:41.225465   31975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:41.226707   31975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:41.227919   31975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:10:41.232682    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:41.232682    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:41.274641    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:41.274641    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:43.828677    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:43.852994    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:43.886713    4604 logs.go:282] 0 containers: []
	W1213 09:10:43.886713    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:43.890625    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:43.919501    4604 logs.go:282] 0 containers: []
	W1213 09:10:43.919501    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:43.923426    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:43.951987    4604 logs.go:282] 0 containers: []
	W1213 09:10:43.951987    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:43.955937    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:43.985130    4604 logs.go:282] 0 containers: []
	W1213 09:10:43.985130    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:43.988484    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:44.018258    4604 logs.go:282] 0 containers: []
	W1213 09:10:44.018258    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:44.022302    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:44.050666    4604 logs.go:282] 0 containers: []
	W1213 09:10:44.050666    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:44.054876    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:44.085108    4604 logs.go:282] 0 containers: []
	W1213 09:10:44.085108    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:44.085108    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:44.085108    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:44.112809    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:44.112809    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:44.193362    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:44.181849   32122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:44.183015   32122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:44.186504   32122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:44.187951   32122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:44.188991   32122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:10:44.193362    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:44.193362    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:44.237334    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:44.237334    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:44.289034    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:44.289034    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:46.855055    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:46.878443    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:46.909614    4604 logs.go:282] 0 containers: []
	W1213 09:10:46.909614    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:46.916327    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:46.944603    4604 logs.go:282] 0 containers: []
	W1213 09:10:46.944603    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:46.948050    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:46.976487    4604 logs.go:282] 0 containers: []
	W1213 09:10:46.976487    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:46.980498    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:47.008131    4604 logs.go:282] 0 containers: []
	W1213 09:10:47.008131    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:47.011552    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:47.039887    4604 logs.go:282] 0 containers: []
	W1213 09:10:47.039887    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:47.043570    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:47.072161    4604 logs.go:282] 0 containers: []
	W1213 09:10:47.072161    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:47.075765    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:47.105843    4604 logs.go:282] 0 containers: []
	W1213 09:10:47.105843    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:47.105843    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:47.105843    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:47.168444    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:47.168444    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:47.198734    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:47.198734    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:47.280671    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:47.269605   32286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:47.270521   32286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:47.272646   32286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:47.273887   32286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:47.274821   32286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:10:47.280671    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:47.280671    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:47.322808    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:47.322808    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:49.882724    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:49.904378    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:49.936667    4604 logs.go:282] 0 containers: []
	W1213 09:10:49.936667    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:49.939740    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:49.973628    4604 logs.go:282] 0 containers: []
	W1213 09:10:49.973628    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:49.977831    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:50.008373    4604 logs.go:282] 0 containers: []
	W1213 09:10:50.008452    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:50.013016    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:50.043104    4604 logs.go:282] 0 containers: []
	W1213 09:10:50.043104    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:50.046855    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:50.078353    4604 logs.go:282] 0 containers: []
	W1213 09:10:50.078353    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:50.082270    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:50.113856    4604 logs.go:282] 0 containers: []
	W1213 09:10:50.113856    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:50.118930    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:50.148208    4604 logs.go:282] 0 containers: []
	W1213 09:10:50.148208    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:50.148208    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:50.148208    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:50.214697    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:50.214697    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:50.243820    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:50.243820    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:50.331549    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:50.320817   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:50.321835   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:50.324796   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:50.325911   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:50.326959   32440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:10:50.331549    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:50.331549    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:50.372171    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:50.372171    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:52.928403    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:52.950923    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 09:10:52.979279    4604 logs.go:282] 0 containers: []
	W1213 09:10:52.979307    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:10:52.982821    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 09:10:53.012984    4604 logs.go:282] 0 containers: []
	W1213 09:10:53.013051    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:10:53.016321    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 09:10:53.046839    4604 logs.go:282] 0 containers: []
	W1213 09:10:53.046839    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:10:53.051164    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 09:10:53.080161    4604 logs.go:282] 0 containers: []
	W1213 09:10:53.080161    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:10:53.083793    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 09:10:53.117152    4604 logs.go:282] 0 containers: []
	W1213 09:10:53.117152    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:10:53.120486    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 09:10:53.150543    4604 logs.go:282] 0 containers: []
	W1213 09:10:53.150543    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:10:53.154171    4604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 09:10:53.184334    4604 logs.go:282] 0 containers: []
	W1213 09:10:53.184334    4604 logs.go:284] No container was found matching "kindnet"
	I1213 09:10:53.184334    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:10:53.184334    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:10:53.228630    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:10:53.228630    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:10:53.282521    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:10:53.282558    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:10:53.346952    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:10:53.346991    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:10:53.373976    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:10:53.373976    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:10:53.455812    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:53.445139   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:53.446098   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:53.447357   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:53.448734   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:53.450762   32605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:10:55.961126    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:10:55.980524    4604 kubeadm.go:602] duration metric: took 4m3.6754433s to restartPrimaryControlPlane
	W1213 09:10:55.980524    4604 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 09:10:55.985356    4604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
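After roughly four minutes of polling without a control plane, minikube gives up on restarting it and falls back to a full reset before re-running init. kubeadm reset --force removes the static-pod manifests and local etcd state without prompting; the logged invocation, re-wrapped for readability:

    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
        kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force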
	I1213 09:10:56.635426    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:10:56.658380    4604 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 09:10:56.677797    4604 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 09:10:56.682473    4604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 09:10:56.699107    4604 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 09:10:56.699107    4604 kubeadm.go:158] found existing configuration files:
	
	I1213 09:10:56.703291    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 09:10:56.719044    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 09:10:56.723277    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 09:10:56.742780    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 09:10:56.756514    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 09:10:56.760505    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 09:10:56.780196    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 09:10:56.793888    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 09:10:56.798332    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 09:10:56.817764    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 09:10:56.829936    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 09:10:56.833707    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
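The four grep-then-rm pairs above are minikube's stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is removed otherwise (here all four are simply absent after the reset). Condensed into one loop, with the endpoint and file names taken from the log:

    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/$f.conf" \
            || sudo rm -f "/etc/kubernetes/$f.conf"
    done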
	I1213 09:10:56.849696    4604 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 09:10:56.965661    4604 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1213 09:10:57.051298    4604 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 09:10:57.163109    4604 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 09:14:58.077510    4604 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 09:14:58.077510    4604 kubeadm.go:319] 
	I1213 09:14:58.077700    4604 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 09:14:58.082513    4604 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 09:14:58.082513    4604 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 09:14:58.083105    4604 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 09:14:58.083105    4604 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1213 09:14:58.083105    4604 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1213 09:14:58.083105    4604 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1213 09:14:58.083105    4604 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1213 09:14:58.083105    4604 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1213 09:14:58.083630    4604 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1213 09:14:58.083660    4604 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1213 09:14:58.083660    4604 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1213 09:14:58.083660    4604 kubeadm.go:319] CONFIG_INET: enabled
	I1213 09:14:58.083660    4604 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1213 09:14:58.083660    4604 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1213 09:14:58.083660    4604 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1213 09:14:58.084184    4604 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1213 09:14:58.084411    4604 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1213 09:14:58.084511    4604 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1213 09:14:58.084637    4604 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1213 09:14:58.084788    4604 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1213 09:14:58.084950    4604 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1213 09:14:58.085041    4604 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1213 09:14:58.085041    4604 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1213 09:14:58.085041    4604 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1213 09:14:58.085041    4604 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1213 09:14:58.085041    4604 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1213 09:14:58.085041    4604 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1213 09:14:58.085561    4604 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1213 09:14:58.085629    4604 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1213 09:14:58.085787    4604 kubeadm.go:319] OS: Linux
	I1213 09:14:58.085905    4604 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 09:14:58.085994    4604 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 09:14:58.086095    4604 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 09:14:58.086249    4604 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 09:14:58.086375    4604 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 09:14:58.086436    4604 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 09:14:58.086559    4604 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 09:14:58.086680    4604 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 09:14:58.086776    4604 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 09:14:58.087006    4604 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 09:14:58.087282    4604 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 09:14:58.087282    4604 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 09:14:58.087282    4604 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 09:14:58.091333    4604 out.go:252]   - Generating certificates and keys ...
	I1213 09:14:58.091333    4604 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 09:14:58.091333    4604 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 09:14:58.091333    4604 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 09:14:58.091861    4604 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 09:14:58.091931    4604 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 09:14:58.091931    4604 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 09:14:58.091931    4604 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 09:14:58.091931    4604 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 09:14:58.091931    4604 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 09:14:58.091931    4604 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 09:14:58.091931    4604 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 09:14:58.091931    4604 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 09:14:58.091931    4604 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 09:14:58.091931    4604 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 09:14:58.092898    4604 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 09:14:58.092898    4604 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 09:14:58.092898    4604 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 09:14:58.092898    4604 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 09:14:58.092898    4604 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 09:14:58.096150    4604 out.go:252]   - Booting up control plane ...
	I1213 09:14:58.096150    4604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 09:14:58.096150    4604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 09:14:58.096150    4604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 09:14:58.096150    4604 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 09:14:58.096150    4604 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 09:14:58.096150    4604 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 09:14:58.097140    4604 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 09:14:58.097140    4604 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 09:14:58.097140    4604 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 09:14:58.097140    4604 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 09:14:58.097140    4604 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00081318s
	I1213 09:14:58.097140    4604 kubeadm.go:319] 
	I1213 09:14:58.097140    4604 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 09:14:58.097140    4604 kubeadm.go:319] 	- The kubelet is not running
	I1213 09:14:58.097140    4604 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 09:14:58.097140    4604 kubeadm.go:319] 
	I1213 09:14:58.098169    4604 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 09:14:58.098169    4604 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 09:14:58.098169    4604 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 09:14:58.098169    4604 kubeadm.go:319] 
	W1213 09:14:58.098169    4604 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00081318s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
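The failure itself is kubeadm's wait-control-plane phase: it polls the kubelet's health endpoint at http://127.0.0.1:10248/healthz for up to 4m0s and gives up when no healthy response arrives, which is exactly the "context deadline exceeded" seen above. A minimal sketch of that probe, assuming a plain HTTP GET as in the log's "curl -sSL" equivalent:

    // Minimal sketch of the kubelet health probe kubeadm describes above:
    // poll http://127.0.0.1:10248/healthz until it answers 200 OK or the
    // 4m0s deadline expires.
    package main

    import (
        "context"
        "fmt"
        "net/http"
        "time"
    )

    func waitKubeletHealthy(ctx context.Context) error {
        ticker := time.NewTicker(time.Second)
        defer ticker.Stop()
        for {
            select {
            case <-ctx.Done():
                return fmt.Errorf("kubelet not healthy: %w", ctx.Err())
            case <-ticker.C:
                resp, err := http.Get("http://127.0.0.1:10248/healthz")
                if err != nil {
                    continue // kubelet not listening yet
                }
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        fmt.Println(waitKubeletHealthy(ctx)) // here: context deadline exceeded
    }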
	
	I1213 09:14:58.103247    4604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1213 09:14:58.557280    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:14:58.576227    4604 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 09:14:58.580590    4604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 09:14:58.591916    4604 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 09:14:58.591916    4604 kubeadm.go:158] found existing configuration files:
	
	I1213 09:14:58.597377    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 09:14:58.611245    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 09:14:58.615321    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 09:14:58.633996    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 09:14:58.647865    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 09:14:58.651889    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 09:14:58.669442    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 09:14:58.682787    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 09:14:58.687832    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 09:14:58.708348    4604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 09:14:58.722058    4604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 09:14:58.727337    4604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 09:14:58.747003    4604 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 09:14:58.861078    4604 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1213 09:14:58.943511    4604 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 09:14:59.043878    4604 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 09:18:59.702905    4604 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 09:18:59.702984    4604 kubeadm.go:319] 
	I1213 09:18:59.703100    4604 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 09:18:59.706956    4604 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 09:18:59.706956    4604 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 09:18:59.708169    4604 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 09:18:59.708169    4604 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1213 09:18:59.708169    4604 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1213 09:18:59.708169    4604 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1213 09:18:59.708169    4604 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1213 09:18:59.708169    4604 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1213 09:18:59.708812    4604 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1213 09:18:59.708880    4604 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1213 09:18:59.708880    4604 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1213 09:18:59.708880    4604 kubeadm.go:319] CONFIG_INET: enabled
	I1213 09:18:59.708880    4604 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1213 09:18:59.708880    4604 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1213 09:18:59.708880    4604 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1213 09:18:59.708880    4604 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1213 09:18:59.708880    4604 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1213 09:18:59.708880    4604 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1213 09:18:59.708880    4604 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1213 09:18:59.709865    4604 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1213 09:18:59.710067    4604 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1213 09:18:59.710115    4604 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1213 09:18:59.710268    4604 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1213 09:18:59.710360    4604 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1213 09:18:59.710543    4604 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1213 09:18:59.710612    4604 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1213 09:18:59.710694    4604 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1213 09:18:59.710783    4604 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1213 09:18:59.710876    4604 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1213 09:18:59.710876    4604 kubeadm.go:319] OS: Linux
	I1213 09:18:59.710876    4604 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 09:18:59.710876    4604 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 09:18:59.710876    4604 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 09:18:59.710876    4604 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 09:18:59.710876    4604 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 09:18:59.711409    4604 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 09:18:59.711492    4604 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 09:18:59.711623    4604 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 09:18:59.711691    4604 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 09:18:59.711874    4604 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 09:18:59.712056    4604 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 09:18:59.712280    4604 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 09:18:59.712416    4604 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 09:18:59.717830    4604 out.go:252]   - Generating certificates and keys ...
	I1213 09:18:59.717830    4604 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 09:18:59.717830    4604 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 09:18:59.717830    4604 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 09:18:59.717830    4604 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 09:18:59.717830    4604 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 09:18:59.717830    4604 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 09:18:59.717830    4604 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 09:18:59.717830    4604 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 09:18:59.718841    4604 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 09:18:59.718841    4604 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 09:18:59.718841    4604 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 09:18:59.718841    4604 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 09:18:59.718841    4604 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 09:18:59.718841    4604 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 09:18:59.718841    4604 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 09:18:59.718841    4604 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 09:18:59.718841    4604 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 09:18:59.718841    4604 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 09:18:59.718841    4604 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 09:18:59.722958    4604 out.go:252]   - Booting up control plane ...
	I1213 09:18:59.722958    4604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 09:18:59.722958    4604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 09:18:59.722958    4604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 09:18:59.723960    4604 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 09:18:59.723960    4604 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 09:18:59.723960    4604 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 09:18:59.723960    4604 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 09:18:59.723960    4604 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 09:18:59.723960    4604 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 09:18:59.724966    4604 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 09:18:59.724966    4604 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001708609s
	I1213 09:18:59.724966    4604 kubeadm.go:319] 
	I1213 09:18:59.724966    4604 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 09:18:59.724966    4604 kubeadm.go:319] 	- The kubelet is not running
	I1213 09:18:59.724966    4604 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 09:18:59.724966    4604 kubeadm.go:319] 
	I1213 09:18:59.724966    4604 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 09:18:59.724966    4604 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 09:18:59.724966    4604 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 09:18:59.724966    4604 kubeadm.go:319] 
	I1213 09:18:59.725960    4604 kubeadm.go:403] duration metric: took 12m7.4678993s to StartCluster
	I1213 09:18:59.725960    4604 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:18:59.729959    4604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:18:59.791539    4604 cri.go:89] found id: ""
	I1213 09:18:59.791620    4604 logs.go:282] 0 containers: []
	W1213 09:18:59.791620    4604 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:18:59.791620    4604 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 09:18:59.796126    4604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:18:59.838188    4604 cri.go:89] found id: ""
	I1213 09:18:59.838188    4604 logs.go:282] 0 containers: []
	W1213 09:18:59.838188    4604 logs.go:284] No container was found matching "etcd"
	I1213 09:18:59.838188    4604 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 09:18:59.842219    4604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:18:59.886873    4604 cri.go:89] found id: ""
	I1213 09:18:59.886928    4604 logs.go:282] 0 containers: []
	W1213 09:18:59.886928    4604 logs.go:284] No container was found matching "coredns"
	I1213 09:18:59.886959    4604 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:18:59.891184    4604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:18:59.935247    4604 cri.go:89] found id: ""
	I1213 09:18:59.935247    4604 logs.go:282] 0 containers: []
	W1213 09:18:59.935247    4604 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:18:59.935247    4604 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:18:59.940658    4604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:18:59.979678    4604 cri.go:89] found id: ""
	I1213 09:18:59.979678    4604 logs.go:282] 0 containers: []
	W1213 09:18:59.979678    4604 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:18:59.979678    4604 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:18:59.984360    4604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:19:00.029429    4604 cri.go:89] found id: ""
	I1213 09:19:00.029429    4604 logs.go:282] 0 containers: []
	W1213 09:19:00.029429    4604 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:19:00.029429    4604 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 09:19:00.034206    4604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:19:00.078417    4604 cri.go:89] found id: ""
	I1213 09:19:00.078417    4604 logs.go:282] 0 containers: []
	W1213 09:19:00.078417    4604 logs.go:284] No container was found matching "kindnet"
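After the init failure, minikube scans for control-plane containers with crictl; every query above returns an empty list because the kubelet never started any static pods. A sketch of the same scan, as a hypothetical standalone program rather than minikube's cri.go:

    // Sketch of the empty-container scan above: list containers per
    // control-plane component with crictl; an empty result reproduces the
    // "0 containers: []" lines in the log.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, name := range components {
            out, _ := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
        }
    }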
	I1213 09:19:00.078417    4604 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:19:00.078417    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:19:00.158314    4604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:19:00.149922   40589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:00.150826   40589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:00.153483   40589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:00.154798   40589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:00.155843   40589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:19:00.149922   40589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:00.150826   40589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:00.153483   40589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:00.154798   40589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:19:00.155843   40589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
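The connection-refused errors above simply mean nothing is listening on localhost:8441: the apiserver was never started, so every kubectl call fails before reaching the API. The same check, reduced to a TCP dial:

    // Reachability check matching the failures above: with no apiserver
    // process, the connect is refused immediately.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println(err) // expected here: connect: connection refused
            return
        }
        conn.Close()
        fmt.Println("apiserver reachable")
    }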
	I1213 09:19:00.158314    4604 logs.go:123] Gathering logs for Docker ...
	I1213 09:19:00.158314    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 09:19:00.200907    4604 logs.go:123] Gathering logs for container status ...
	I1213 09:19:00.201904    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:19:00.251291    4604 logs.go:123] Gathering logs for kubelet ...
	I1213 09:19:00.251291    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:19:00.314330    4604 logs.go:123] Gathering logs for dmesg ...
	I1213 09:19:00.314330    4604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 09:19:00.346177    4604 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001708609s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 09:19:00.346280    4604 out.go:285] * 
	W1213 09:19:00.346392    4604 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr identical to the "Error starting cluster" output above]
	W1213 09:19:00.346427    4604 out.go:285] * 
	W1213 09:19:00.348597    4604 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 09:19:00.354189    4604 out.go:203] 
	W1213 09:19:00.361975    4604 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr identical to the "Error starting cluster" output above]
	W1213 09:19:00.362101    4604 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 09:19:00.362101    4604 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 09:19:00.368166    4604 out.go:203] 
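The suggested fix targets a cgroup-driver mismatch between the container runtime and the kubelet (note that cri-dockerd in the Docker section below reports "Setting cgroupDriver cgroupfs"). One way to triage it, assuming "docker info" is available inside the node; the retry flag comes from the suggestion above:

    // Triage sketch for the suggestion above: print Docker's cgroup driver
    // so it can be compared with the kubelet's.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("docker", "info",
            "--format", "{{.CgroupDriver}}").Output()
        if err != nil {
            fmt.Println("docker info failed:", err)
            return
        }
        fmt.Printf("docker cgroup driver: %s", out)
        // If this disagrees with the kubelet's driver, retry with the
        // suggested flag:
        //   minikube start --extra-config=kubelet.cgroup-driver=systemd
    }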
	
	
	==> Docker <==
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.829030467Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.829036768Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.829059870Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.829091672Z" level=info msg="Initializing buildkit"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.942041157Z" level=info msg="Completed buildkit initialization"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.947761286Z" level=info msg="Daemon has completed initialization"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.947947300Z" level=info msg="API listen on [::]:2376"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.948053208Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.948082310Z" level=info msg="API listen on /run/docker.sock"
	Dec 13 09:06:48 functional-482100 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 13 09:06:49 functional-482100 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 09:06:49 functional-482100 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 13 09:06:49 functional-482100 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 13 09:06:49 functional-482100 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Start docker client with request timeout 0s"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Loaded network plugin cni"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 13 09:06:49 functional-482100 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:20:48.141961   43208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:20:48.142948   43208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:20:48.144098   43208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:20:48.145027   43208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:20:48.146469   43208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000787] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001010] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001229] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001341] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001210] FS:  0000000000000000 GS:  0000000000000000
	[Dec13 09:06] CPU: 10 PID: 66098 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000816] RIP: 0033:0x7fb64675ab20
	[  +0.000442] Code: Unable to access opcode bytes at RIP 0x7fb64675aaf6.
	[  +0.000680] RSP: 002b:00007ffe69215830 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000780] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000798] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000796] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000835] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000824] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000777] FS:  0000000000000000 GS:  0000000000000000
	[  +0.885911] CPU: 0 PID: 66226 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000821] RIP: 0033:0x7f29b6797b20
	[  +0.000390] Code: Unable to access opcode bytes at RIP 0x7f29b6797af6.
	[  +0.000688] RSP: 002b:00007fff1d5027b0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000799] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000781] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000770] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000791] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001021] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001388] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 09:20:48 up 56 min,  0 user,  load average: 0.85, 0.46, 0.47
	Linux functional-482100 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 09:20:45 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:20:45 functional-482100 kubelet[43022]: E1213 09:20:45.190432   43022 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:20:45 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:20:45 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:20:45 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 461.
	Dec 13 09:20:45 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:20:45 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:20:46 functional-482100 kubelet[43052]: E1213 09:20:46.003275   43052 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:20:46 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:20:46 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:20:46 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 462.
	Dec 13 09:20:46 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:20:46 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:20:46 functional-482100 kubelet[43079]: E1213 09:20:46.721478   43079 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:20:46 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:20:46 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:20:47 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 463.
	Dec 13 09:20:47 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:20:47 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:20:47 functional-482100 kubelet[43105]: E1213 09:20:47.499837   43105 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:20:47 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:20:47 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:20:48 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 464.
	Dec 13 09:20:48 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:20:48 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
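
Note on the failure mode above: every kubelet restart in that log (restart counter 461-464) dies in config validation because kubelet v1.35.0-beta.0 refuses to run on a cgroup v1 host, and this WSL2 kernel (5.15.153.1-microsoft-standard-WSL2) is still mounting cgroup v1. A minimal triage sketch, assuming a shell inside the node; the failCgroupV1 field and the .wslconfig knob are taken from the SystemVerification warning text and general WSL2 practice, not verified in this job:

    # Which cgroup hierarchy is the node actually on?
    stat -fc %T /sys/fs/cgroup/    # "cgroup2fs" => cgroup v2, "tmpfs" => cgroup v1

    # Workaround 1 (assumed, per the warning): opt kubelet back into cgroup v1
    # in its KubeletConfiguration file:
    #   apiVersion: kubelet.config.k8s.io/v1beta1
    #   kind: KubeletConfiguration
    #   failCgroupV1: false

    # Workaround 2 (assumed): move the WSL2 kernel to cgroup v2 from the Windows
    # host, then `wsl --shutdown` and restart Docker Desktop:
    #   %UserProfile%\.wslconfig
    #   [wsl2]
    #   kernelCommandLine = cgroup_no_v1=all
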
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-482100 -n functional-482100
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-482100 -n functional-482100: exit status 2 (583.0773ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-482100" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (5.41s)
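
The probe this test runs is just minikube status rendered through a Go template, and minikube deliberately exits non-zero when a component is not Running, which is why the helper tags exit status 2 as "may be ok". A sketch of the same check, scriptable on the Windows host (profile name taken from this run):

    out/minikube-windows-amd64.exe status -p functional-482100 --format "{{.Host}}|{{.Kubelet}}|{{.APIServer}}"
    # prints e.g. "Running|Stopped|Stopped"; a stopped component yields a non-zero
    # exit status (2 here), so in PowerShell test $LASTEXITCODE rather than
    # parsing stdout alone.
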

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (122.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-482100 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-482100 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (93.8245ms)

** stderr ** 
	error: failed to create deployment: Post "https://127.0.0.1:63845/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": EOF

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-482100 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-482100 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-482100 describe po hello-node-connect: exit status 1 (50.3591399s)

** stderr ** 
	E1213 09:20:33.309199    8008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:20:43.396050    8008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:20:53.439135    8008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:21:03.477336    8008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:21:13.521979    8008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

** /stderr **
functional_test.go:1614: "kubectl --context functional-482100 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-482100 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-482100 logs -l app=hello-node-connect: exit status 1 (40.2904337s)

** stderr ** 
	E1213 09:21:23.658585    9204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:21:33.738503    9204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:21:43.775875    9204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:21:53.817717    9204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

** /stderr **
functional_test.go:1620: "kubectl --context functional-482100 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-482100 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-482100 describe svc hello-node-connect: exit status 1 (29.3446474s)

** stderr ** 
	E1213 09:22:03.943096    7544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:22:14.036311    7544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"

** /stderr **
functional_test.go:1626: "kubectl --context functional-482100 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
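
Every kubectl call in this post-mortem fails the same way: EOF from https://127.0.0.1:63845, which is Docker Desktop's host-side forward of the node's apiserver port 8441/tcp (the mapping is visible in the docker inspect output below). EOF rather than "connection refused" usually means the forward accepted the TCP connection but the backend behind it was gone. A rough way to tell the two layers apart (a sketch; assumes curl is available on the host and inside the kicbase image):

    # Through the Docker Desktop forward, from the Windows host:
    curl -k https://127.0.0.1:63845/livez
    # Bypassing the forward, from inside the node:
    docker exec functional-482100 curl -sk https://localhost:8441/livez
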
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
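
Worth noting against that empty host PROXY snapshot: the docker info captured in the "Last Start" log below reports an HTTPProxy/HTTPSProxy of http.docker.internal:3128, i.e. Docker Desktop injects its own proxy even when the host environment sets none. A quick way to see what the daemon itself believes (a sketch; these fields are part of docker info's Go-template context):

    docker info --format "{{.HTTPProxy}} {{.HTTPSProxy}} {{.NoProxy}}"
    # here: http.docker.internal:3128 http.docker.internal:3128 hubproxy.docker.internal
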
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-482100
helpers_test.go:244: (dbg) docker inspect functional-482100:

-- stdout --
	[
	    {
	        "Id": "688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa",
	        "Created": "2025-12-13T08:49:07.27080474Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43282,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T08:49:07.556748749Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/hostname",
	        "HostsPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/hosts",
	        "LogPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa-json.log",
	        "Name": "/functional-482100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-482100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-482100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91-init/diff:/var/lib/docker/overlay2/429aa299c6fcdb1695d08ec7c893c57c033afffcd3ec41fc904bf3236db5abde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-482100",
	                "Source": "/var/lib/docker/volumes/functional-482100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-482100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-482100",
	                "name.minikube.sigs.k8s.io": "functional-482100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0846ee7b9ca8cb54809a7d685cd1bf9a4ebcad80c4fa7d3ad64c01e27d0c8bc4",
	            "SandboxKey": "/var/run/docker/netns/0846ee7b9ca8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63841"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63842"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63844"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63845"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-482100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "88ce21d6cbdebdf878313475255fe0fbc85957ab9cf1fa33630b61bbbfd2061c",
	                    "EndpointID": "88d9584a7fae8c35f7938fb422a7bed2f8ec5a3db15bd02c0d2459ed9f8f0e4d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-482100",
	                        "688ac19b4403"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
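
The inspect output confirms the 8441/tcp -> 127.0.0.1:63845 mapping that the failing kubectl calls were using. For reference, the same fact can be pulled out of docker inspect directly with a Go template (a sketch using this run's profile name; quote the template with single quotes in a POSIX shell):

    docker inspect -f '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' functional-482100
    # -> 63845
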
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-482100 -n functional-482100
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-482100 -n functional-482100: exit status 2 (596.0333ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-482100 logs -n 25: (1.0399455s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND   │                                                                           ARGS                                                                            │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start      │ -p functional-482100 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0                                       │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │                     │
	│ start      │ -p functional-482100 --dry-run --alsologtostderr -v=1 --driver=docker --kubernetes-version=v1.35.0-beta.0                                                 │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │                     │
	│ dashboard  │ --url --port 36195 -p functional-482100 --alsologtostderr -v=1                                                                                            │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │                     │
	│ ssh        │ functional-482100 ssh sudo cat /etc/ssl/certs/2968.pem                                                                                                    │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ ssh        │ functional-482100 ssh sudo cat /usr/share/ca-certificates/2968.pem                                                                                        │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ ssh        │ functional-482100 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ ssh        │ functional-482100 ssh sudo cat /etc/ssl/certs/29682.pem                                                                                                   │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ ssh        │ functional-482100 ssh sudo cat /usr/share/ca-certificates/29682.pem                                                                                       │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ ssh        │ functional-482100 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ docker-env │ functional-482100 docker-env                                                                                                                              │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ ssh        │ functional-482100 ssh sudo cat /etc/test/nested/copy/2968/hosts                                                                                           │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ ssh        │ functional-482100 ssh sudo systemctl is-active crio                                                                                                       │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │                     │
	│ license    │                                                                                                                                                           │ minikube          │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image      │ functional-482100 image load --daemon kicbase/echo-server:functional-482100 --alsologtostderr                                                             │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image      │ functional-482100 image ls                                                                                                                                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image      │ functional-482100 image load --daemon kicbase/echo-server:functional-482100 --alsologtostderr                                                             │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image      │ functional-482100 image ls                                                                                                                                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image      │ functional-482100 image load --daemon kicbase/echo-server:functional-482100 --alsologtostderr                                                             │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image      │ functional-482100 image ls                                                                                                                                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image      │ functional-482100 image save kicbase/echo-server:functional-482100 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image      │ functional-482100 image rm kicbase/echo-server:functional-482100 --alsologtostderr                                                                        │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image      │ functional-482100 image ls                                                                                                                                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image      │ functional-482100 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr                                       │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image      │ functional-482100 image ls                                                                                                                                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image      │ functional-482100 image save --daemon kicbase/echo-server:functional-482100 --alsologtostderr                                                             │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	└────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:20:51
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:20:51.429239    8152 out.go:360] Setting OutFile to fd 1760 ...
	I1213 09:20:51.482128    8152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:20:51.482128    8152 out.go:374] Setting ErrFile to fd 1944...
	I1213 09:20:51.482128    8152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:20:51.495159    8152 out.go:368] Setting JSON to false
	I1213 09:20:51.497157    8152 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3458,"bootTime":1765614192,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 09:20:51.497157    8152 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 09:20:51.501159    8152 out.go:179] * [functional-482100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 09:20:51.504155    8152 notify.go:221] Checking for updates...
	I1213 09:20:51.506155    8152 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 09:20:51.508155    8152 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:20:51.511156    8152 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 09:20:51.513157    8152 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:20:51.515157    8152 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:20:51.518157    8152 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 09:20:51.519157    8152 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:20:51.637790    8152 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 09:20:51.641441    8152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:20:51.870640    8152 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-13 09:20:51.852535876 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 09:20:51.873468    8152 out.go:179] * Using the docker driver based on existing profile
	I1213 09:20:51.879058    8152 start.go:309] selected driver: docker
	I1213 09:20:51.879058    8152 start.go:927] validating driver "docker" against &{Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:20:51.879058    8152 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:20:51.885141    8152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:20:52.120780    8152 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-13 09:20:52.103897698 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 09:20:52.155676    8152 cni.go:84] Creating CNI manager for ""
	I1213 09:20:52.155676    8152 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 09:20:52.156342    8152 start.go:353] cluster config:
	{Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:20:52.160138    8152 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.829030467Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.829036768Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.829059870Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.829091672Z" level=info msg="Initializing buildkit"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.942041157Z" level=info msg="Completed buildkit initialization"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.947761286Z" level=info msg="Daemon has completed initialization"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.947947300Z" level=info msg="API listen on [::]:2376"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.948053208Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.948082310Z" level=info msg="API listen on /run/docker.sock"
	Dec 13 09:06:48 functional-482100 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 13 09:06:49 functional-482100 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 09:06:49 functional-482100 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 13 09:06:49 functional-482100 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 13 09:06:49 functional-482100 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Start docker client with request timeout 0s"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Loaded network plugin cni"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 13 09:06:49 functional-482100 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:22:24.748202   45622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:22:24.749406   45622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:22:24.750481   45622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:22:24.751745   45622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:22:24.752847   45622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000787] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001010] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001229] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001341] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001210] FS:  0000000000000000 GS:  0000000000000000
	[Dec13 09:06] CPU: 10 PID: 66098 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000816] RIP: 0033:0x7fb64675ab20
	[  +0.000442] Code: Unable to access opcode bytes at RIP 0x7fb64675aaf6.
	[  +0.000680] RSP: 002b:00007ffe69215830 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000780] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000798] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000796] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000835] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000824] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000777] FS:  0000000000000000 GS:  0000000000000000
	[  +0.885911] CPU: 0 PID: 66226 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000821] RIP: 0033:0x7f29b6797b20
	[  +0.000390] Code: Unable to access opcode bytes at RIP 0x7f29b6797af6.
	[  +0.000688] RSP: 002b:00007fff1d5027b0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000799] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000781] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000770] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000791] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001021] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001388] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 09:22:24 up 58 min,  0 user,  load average: 0.38, 0.39, 0.44
	Linux functional-482100 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 09:22:21 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:22:21 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 589.
	Dec 13 09:22:21 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:22:21 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:22:21 functional-482100 kubelet[45464]: E1213 09:22:21.960790   45464 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:22:21 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:22:21 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:22:22 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 590.
	Dec 13 09:22:22 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:22:22 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:22:22 functional-482100 kubelet[45477]: E1213 09:22:22.719020   45477 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:22:22 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:22:22 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:22:23 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 591.
	Dec 13 09:22:23 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:22:23 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:22:23 functional-482100 kubelet[45488]: E1213 09:22:23.506403   45488 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:22:23 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:22:23 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:22:24 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 592.
	Dec 13 09:22:24 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:22:24 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:22:24 functional-482100 kubelet[45517]: E1213 09:22:24.236777   45517 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:22:24 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:22:24 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
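Note: the kubelet crash loop at the end of the log above (restart counters 589-592) is a hard validation failure, not a flake: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host, and this WSL2-based agent is on cgroup v1 (the dockerd log later in this report also warns that cgroup v1 support is deprecated). A quick way to confirm which hierarchy the node sees, plus the commonly documented WSL2 switch to cgroup v2 (a sketch; it assumes the agent's .wslconfig can be edited and WSL restarted):

	# Inside the minikube node: "cgroup2fs" means cgroup v2, "tmpfs" means the legacy v1 hierarchy.
	minikube -p functional-482100 ssh -- stat -fc %T /sys/fs/cgroup/

	# On the Windows host, %UserProfile%\.wslconfig can force cgroup v2 for WSL2:
	#   [wsl2]
	#   kernelCommandLine = cgroup_no_v1=all
	# then restart WSL (wsl --shutdown) and Docker Desktop.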
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-482100 -n functional-482100
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-482100 -n functional-482100: exit status 2 (574.0855ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-482100" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (122.39s)
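Note: every kubectl call in this failure reports "dial tcp [::1]:8441: connect: connection refused" because the apiserver never came up behind the crash-looping kubelet. The same refusal is reproducible from the Windows host against the published port (63845 maps to the node's 8441 per the docker inspect output further down; assumes curl is on PATH):

	curl -k https://127.0.0.1:63845/healthz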

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (242.86s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:63845/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
[... the WARNING line above repeats 22 more times, verbatim, while the test polls the apiserver over the 4m wait ...]
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-482100 -n functional-482100
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-482100 -n functional-482100: exit status 2 (582.3762ms)

-- stdout --
	Stopped

-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-482100" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
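Note: the harness polled the pod list itself; once an apiserver is reachable, the equivalent one-shot check is a label-selector wait (a sketch using the profile's kubeconfig context):

	kubectl --context functional-482100 -n kube-system wait pod \
	  -l integration-test=storage-provisioner --for=condition=Ready --timeout=4m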
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-482100
helpers_test.go:244: (dbg) docker inspect functional-482100:

-- stdout --
	[
	    {
	        "Id": "688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa",
	        "Created": "2025-12-13T08:49:07.27080474Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43282,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T08:49:07.556748749Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/hostname",
	        "HostsPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/hosts",
	        "LogPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa-json.log",
	        "Name": "/functional-482100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-482100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-482100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91-init/diff:/var/lib/docker/overlay2/429aa299c6fcdb1695d08ec7c893c57c033afffcd3ec41fc904bf3236db5abde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-482100",
	                "Source": "/var/lib/docker/volumes/functional-482100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-482100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-482100",
	                "name.minikube.sigs.k8s.io": "functional-482100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0846ee7b9ca8cb54809a7d685cd1bf9a4ebcad80c4fa7d3ad64c01e27d0c8bc4",
	            "SandboxKey": "/var/run/docker/netns/0846ee7b9ca8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63841"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63842"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63844"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63845"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-482100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "88ce21d6cbdebdf878313475255fe0fbc85957ab9cf1fa33630b61bbbfd2061c",
	                    "EndpointID": "88d9584a7fae8c35f7938fb422a7bed2f8ec5a3db15bd02c0d2459ed9f8f0e4d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-482100",
	                        "688ac19b4403"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
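Note: the NetworkSettings.Ports block above is where the 127.0.0.1:63845 endpoint used by the failing pod-list calls comes from (host port 63845 is bound to the node's apiserver port 8441). The same mapping can be read directly with a Go template instead of scanning the full inspect dump:

	docker inspect functional-482100 \
	  --format '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'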
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-482100 -n functional-482100
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-482100 -n functional-482100: exit status 2 (579.4365ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-482100 logs -n 25: (1.0365886s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-482100 ssh sudo cat /etc/test/nested/copy/2968/hosts                                                                                           │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ ssh            │ functional-482100 ssh sudo systemctl is-active crio                                                                                                       │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │                     │
	│ license        │                                                                                                                                                           │ minikube          │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image          │ functional-482100 image load --daemon kicbase/echo-server:functional-482100 --alsologtostderr                                                             │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image          │ functional-482100 image ls                                                                                                                                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image          │ functional-482100 image load --daemon kicbase/echo-server:functional-482100 --alsologtostderr                                                             │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image          │ functional-482100 image ls                                                                                                                                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image          │ functional-482100 image load --daemon kicbase/echo-server:functional-482100 --alsologtostderr                                                             │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image          │ functional-482100 image ls                                                                                                                                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image          │ functional-482100 image save kicbase/echo-server:functional-482100 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image          │ functional-482100 image rm kicbase/echo-server:functional-482100 --alsologtostderr                                                                        │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image          │ functional-482100 image ls                                                                                                                                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image          │ functional-482100 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr                                       │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image          │ functional-482100 image ls                                                                                                                                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image          │ functional-482100 image save --daemon kicbase/echo-server:functional-482100 --alsologtostderr                                                             │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ update-context │ functional-482100 update-context --alsologtostderr -v=2                                                                                                   │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:22 UTC │ 13 Dec 25 09:22 UTC │
	│ update-context │ functional-482100 update-context --alsologtostderr -v=2                                                                                                   │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:22 UTC │ 13 Dec 25 09:22 UTC │
	│ update-context │ functional-482100 update-context --alsologtostderr -v=2                                                                                                   │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:22 UTC │ 13 Dec 25 09:22 UTC │
	│ image          │ functional-482100 image ls --format short --alsologtostderr                                                                                               │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:22 UTC │ 13 Dec 25 09:22 UTC │
	│ image          │ functional-482100 image ls --format yaml --alsologtostderr                                                                                                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:22 UTC │ 13 Dec 25 09:22 UTC │
	│ ssh            │ functional-482100 ssh pgrep buildkitd                                                                                                                     │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:22 UTC │                     │
	│ image          │ functional-482100 image ls --format json --alsologtostderr                                                                                                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:22 UTC │ 13 Dec 25 09:22 UTC │
	│ image          │ functional-482100 image build -t localhost/my-image:functional-482100 testdata\build --alsologtostderr                                                    │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:22 UTC │ 13 Dec 25 09:22 UTC │
	│ image          │ functional-482100 image ls --format table --alsologtostderr                                                                                               │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:22 UTC │ 13 Dec 25 09:22 UTC │
	│ image          │ functional-482100 image ls                                                                                                                                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:22 UTC │ 13 Dec 25 09:22 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:20:51
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:20:51.429239    8152 out.go:360] Setting OutFile to fd 1760 ...
	I1213 09:20:51.482128    8152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:20:51.482128    8152 out.go:374] Setting ErrFile to fd 1944...
	I1213 09:20:51.482128    8152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:20:51.495159    8152 out.go:368] Setting JSON to false
	I1213 09:20:51.497157    8152 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3458,"bootTime":1765614192,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 09:20:51.497157    8152 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 09:20:51.501159    8152 out.go:179] * [functional-482100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 09:20:51.504155    8152 notify.go:221] Checking for updates...
	I1213 09:20:51.506155    8152 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 09:20:51.508155    8152 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:20:51.511156    8152 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 09:20:51.513157    8152 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:20:51.515157    8152 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:20:51.518157    8152 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 09:20:51.519157    8152 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:20:51.637790    8152 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 09:20:51.641441    8152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:20:51.870640    8152 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-13 09:20:51.852535876 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 09:20:51.873468    8152 out.go:179] * Using the docker driver based on existing profile
	I1213 09:20:51.879058    8152 start.go:309] selected driver: docker
	I1213 09:20:51.879058    8152 start.go:927] validating driver "docker" against &{Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:20:51.879058    8152 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:20:51.885141    8152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:20:52.120780    8152 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-13 09:20:52.103897698 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 09:20:52.155676    8152 cni.go:84] Creating CNI manager for ""
	I1213 09:20:52.155676    8152 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 09:20:52.156342    8152 start.go:353] cluster config:
	{Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:20:52.160138    8152 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.829036768Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.829059870Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.829091672Z" level=info msg="Initializing buildkit"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.942041157Z" level=info msg="Completed buildkit initialization"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.947761286Z" level=info msg="Daemon has completed initialization"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.947947300Z" level=info msg="API listen on [::]:2376"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.948053208Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.948082310Z" level=info msg="API listen on /run/docker.sock"
	Dec 13 09:06:48 functional-482100 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 13 09:06:49 functional-482100 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 09:06:49 functional-482100 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 13 09:06:49 functional-482100 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 13 09:06:49 functional-482100 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Start docker client with request timeout 0s"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Loaded network plugin cni"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 13 09:06:49 functional-482100 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 13 09:22:33 functional-482100 dockerd[21650]: time="2025-12-13T09:22:33.674169751Z" level=info msg="sbJoin: gwep4 ''->'4b00b7c48a7e', gwep6 ''->''"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:24:24.692919   48323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:24:24.694266   48323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:24:24.695491   48323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:24:24.697110   48323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:24:24.698260   48323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000787] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001010] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001229] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001341] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001210] FS:  0000000000000000 GS:  0000000000000000
	[Dec13 09:06] CPU: 10 PID: 66098 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000816] RIP: 0033:0x7fb64675ab20
	[  +0.000442] Code: Unable to access opcode bytes at RIP 0x7fb64675aaf6.
	[  +0.000680] RSP: 002b:00007ffe69215830 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000780] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000798] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000796] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000835] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000824] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000777] FS:  0000000000000000 GS:  0000000000000000
	[  +0.885911] CPU: 0 PID: 66226 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000821] RIP: 0033:0x7f29b6797b20
	[  +0.000390] Code: Unable to access opcode bytes at RIP 0x7f29b6797af6.
	[  +0.000688] RSP: 002b:00007fff1d5027b0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000799] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000781] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000770] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000791] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001021] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001388] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 09:24:24 up  1:00,  0 user,  load average: 0.31, 0.37, 0.43
	Linux functional-482100 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 09:24:21 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:24:21 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 749.
	Dec 13 09:24:21 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:24:21 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:24:21 functional-482100 kubelet[48147]: E1213 09:24:21.964402   48147 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:24:21 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:24:21 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:24:22 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 750.
	Dec 13 09:24:22 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:24:22 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:24:22 functional-482100 kubelet[48159]: E1213 09:24:22.709123   48159 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:24:22 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:24:22 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:24:23 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 751.
	Dec 13 09:24:23 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:24:23 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:24:23 functional-482100 kubelet[48187]: E1213 09:24:23.459081   48187 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:24:23 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:24:23 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:24:24 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 752.
	Dec 13 09:24:24 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:24:24 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:24:24 functional-482100 kubelet[48214]: E1213 09:24:24.231624   48214 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:24:24 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:24:24 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
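
The kubelet log above shows the node agent crash-looping: systemd restarts it (counter 749 through 752) and each attempt dies on the same validation error, because the v1.35.0-beta.0 kubelet refuses to start on a cgroup v1 host and this WSL2 node is still on the legacy hierarchy (consistent with cri-dockerd's "Setting cgroupDriver cgroupfs" earlier in the log). A minimal sketch for confirming the cgroup mode from the host; the profile name is taken from this run, but the commands themselves are generic diagnostics rather than part of the test suite:

	# "cgroup2fs" means cgroup v2; "tmpfs" means the legacy cgroup v1 layout
	out/minikube-windows-amd64.exe -p functional-482100 ssh -- stat -fc %T /sys/fs/cgroup/
	# Docker reports the same information as a single field
	docker info --format "{{.CgroupVersion}}"

Recent kubelets expose this check through the failCgroupV1 configuration field, so whether a cgroup v1 host is a hard failure depends on the kubelet version and configuration in play.
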
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-482100 -n functional-482100
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-482100 -n functional-482100: exit status 2 (582.1942ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-482100" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (242.86s)
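
The status probes above pull single fields out of minikube's status via Go templates: {{.APIServer}} reports Stopped even though the docker container itself is still up (the {{.Host}} probe later in this report returns Running), which matches the crash-looping-kubelet picture from the logs. A sketch of querying several of the standard fields (Host, Kubelet, APIServer) at once:

	out/minikube-windows-amd64.exe status -p functional-482100 --format "host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}"

minikube status exits non-zero whenever a component is not Running, which is why helpers_test treats exit status 2 as "may be ok".
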
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (22.5s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-482100 replace --force -f testdata\mysql.yaml
functional_test.go:1798: (dbg) Non-zero exit: kubectl --context functional-482100 replace --force -f testdata\mysql.yaml: exit status 1 (20.2128014s)
** stderr ** 
	E1213 09:21:14.599053   14136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:21:24.682930   14136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	unable to recognize "testdata\\mysql.yaml": Get "https://127.0.0.1:63845/api?timeout=32s": EOF
	unable to recognize "testdata\\mysql.yaml": Get "https://127.0.0.1:63845/api?timeout=32s": EOF
** /stderr **
functional_test.go:1800: failed to kubectl replace mysql: args "kubectl --context functional-482100 replace --force -f testdata\\mysql.yaml" failed: exit status 1
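
The EOF errors above are against https://127.0.0.1:63845, the host port that Docker publishes for the container's 8441/tcp apiserver endpoint (the mapping is visible in the NetworkSettings.Ports block of the inspect output below). A sketch of probing that endpoint directly to separate a dead apiserver from a broken port forward; /readyz is the standard apiserver health path, not something this test calls:

	curl -k https://127.0.0.1:63845/readyz

With the kubelet crash-looping on its cgroup v1 check, no static apiserver pod is running behind 8441, so the forwarded connection is accepted and immediately closed, which kubectl surfaces as EOF.
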
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-482100
helpers_test.go:244: (dbg) docker inspect functional-482100:
-- stdout --
	[
	    {
	        "Id": "688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa",
	        "Created": "2025-12-13T08:49:07.27080474Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43282,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T08:49:07.556748749Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/hostname",
	        "HostsPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/hosts",
	        "LogPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa-json.log",
	        "Name": "/functional-482100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-482100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-482100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91-init/diff:/var/lib/docker/overlay2/429aa299c6fcdb1695d08ec7c893c57c033afffcd3ec41fc904bf3236db5abde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-482100",
	                "Source": "/var/lib/docker/volumes/functional-482100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-482100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-482100",
	                "name.minikube.sigs.k8s.io": "functional-482100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0846ee7b9ca8cb54809a7d685cd1bf9a4ebcad80c4fa7d3ad64c01e27d0c8bc4",
	            "SandboxKey": "/var/run/docker/netns/0846ee7b9ca8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63841"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63842"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63844"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63845"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-482100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "88ce21d6cbdebdf878313475255fe0fbc85957ab9cf1fa33630b61bbbfd2061c",
	                    "EndpointID": "88d9584a7fae8c35f7938fb422a7bed2f8ec5a3db15bd02c0d2459ed9f8f0e4d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-482100",
	                        "688ac19b4403"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
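
Rather than scanning the full JSON for the port mapping, the published host port can be extracted with a Go template; a sketch using the container name and the 8441/tcp apiserver port from this run (quoting shown for a Windows shell):

	docker inspect -f "{{(index (index .NetworkSettings.Ports \"8441/tcp\") 0).HostPort}}" functional-482100

which should print 63845 for the state captured above.
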
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-482100 -n functional-482100
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-482100 -n functional-482100: exit status 2 (578.6965ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-482100 logs -n 25: (1.0355829s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND   │                                                                                                 ARGS                                                                                                 │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp         │ functional-482100 cp functional-482100:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp315632686\001\cp-test.txt │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ addons     │ functional-482100 addons list                                                                                                                                                                        │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ addons     │ functional-482100 addons list -o json                                                                                                                                                                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ ssh        │ functional-482100 ssh -n functional-482100 sudo cat /home/docker/cp-test.txt                                                                                                                         │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ cp         │ functional-482100 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                                                            │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ ssh        │ functional-482100 ssh -n functional-482100 sudo cat /tmp/does/not/exist/cp-test.txt                                                                                                                  │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ service    │ functional-482100 service list                                                                                                                                                                       │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │                     │
	│ service    │ functional-482100 service list -o json                                                                                                                                                               │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │                     │
	│ service    │ functional-482100 service --namespace=default --https --url hello-node                                                                                                                               │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │                     │
	│ service    │ functional-482100 service hello-node --url --format={{.IP}}                                                                                                                                          │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │                     │
	│ service    │ functional-482100 service hello-node --url                                                                                                                                                           │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │                     │
	│ start      │ -p functional-482100 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0                                                                                  │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │                     │
	│ start      │ -p functional-482100 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0                                                                                  │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │                     │
	│ start      │ -p functional-482100 --dry-run --alsologtostderr -v=1 --driver=docker --kubernetes-version=v1.35.0-beta.0                                                                                            │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │                     │
	│ dashboard  │ --url --port 36195 -p functional-482100 --alsologtostderr -v=1                                                                                                                                       │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │                     │
	│ ssh        │ functional-482100 ssh sudo cat /etc/ssl/certs/2968.pem                                                                                                                                               │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ ssh        │ functional-482100 ssh sudo cat /usr/share/ca-certificates/2968.pem                                                                                                                                   │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ ssh        │ functional-482100 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                                                             │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ ssh        │ functional-482100 ssh sudo cat /etc/ssl/certs/29682.pem                                                                                                                                              │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ ssh        │ functional-482100 ssh sudo cat /usr/share/ca-certificates/29682.pem                                                                                                                                  │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ ssh        │ functional-482100 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                                                             │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ docker-env │ functional-482100 docker-env                                                                                                                                                                         │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ ssh        │ functional-482100 ssh sudo cat /etc/test/nested/copy/2968/hosts                                                                                                                                      │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ ssh        │ functional-482100 ssh sudo systemctl is-active crio                                                                                                                                                  │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │                     │
	│ license    │                                                                                                                                                                                                      │ minikube          │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	└────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:20:51
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:20:51.429239    8152 out.go:360] Setting OutFile to fd 1760 ...
	I1213 09:20:51.482128    8152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:20:51.482128    8152 out.go:374] Setting ErrFile to fd 1944...
	I1213 09:20:51.482128    8152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:20:51.495159    8152 out.go:368] Setting JSON to false
	I1213 09:20:51.497157    8152 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3458,"bootTime":1765614192,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 09:20:51.497157    8152 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 09:20:51.501159    8152 out.go:179] * [functional-482100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 09:20:51.504155    8152 notify.go:221] Checking for updates...
	I1213 09:20:51.506155    8152 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 09:20:51.508155    8152 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:20:51.511156    8152 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 09:20:51.513157    8152 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:20:51.515157    8152 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:20:51.518157    8152 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 09:20:51.519157    8152 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:20:51.637790    8152 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 09:20:51.641441    8152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:20:51.870640    8152 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-13 09:20:51.852535876 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 09:20:51.873468    8152 out.go:179] * Using the docker driver based on existing profile
	I1213 09:20:51.879058    8152 start.go:309] selected driver: docker
	I1213 09:20:51.879058    8152 start.go:927] validating driver "docker" against &{Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:20:51.879058    8152 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:20:51.885141    8152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:20:52.120780    8152 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-13 09:20:52.103897698 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 09:20:52.155676    8152 cni.go:84] Creating CNI manager for ""
	I1213 09:20:52.155676    8152 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 09:20:52.156342    8152 start.go:353] cluster config:
	{Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:20:52.160138    8152 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.829030467Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.829036768Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.829059870Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.829091672Z" level=info msg="Initializing buildkit"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.942041157Z" level=info msg="Completed buildkit initialization"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.947761286Z" level=info msg="Daemon has completed initialization"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.947947300Z" level=info msg="API listen on [::]:2376"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.948053208Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.948082310Z" level=info msg="API listen on /run/docker.sock"
	Dec 13 09:06:48 functional-482100 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 13 09:06:49 functional-482100 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 09:06:49 functional-482100 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 13 09:06:49 functional-482100 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 13 09:06:49 functional-482100 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Start docker client with request timeout 0s"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Loaded network plugin cni"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 13 09:06:49 functional-482100 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:21:26.240472   44247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:21:26.241778   44247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:21:26.242841   44247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:21:26.243905   44247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:21:26.245073   44247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000787] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001010] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001229] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001341] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001210] FS:  0000000000000000 GS:  0000000000000000
	[Dec13 09:06] CPU: 10 PID: 66098 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000816] RIP: 0033:0x7fb64675ab20
	[  +0.000442] Code: Unable to access opcode bytes at RIP 0x7fb64675aaf6.
	[  +0.000680] RSP: 002b:00007ffe69215830 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000780] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000798] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000796] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000835] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000824] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000777] FS:  0000000000000000 GS:  0000000000000000
	[  +0.885911] CPU: 0 PID: 66226 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000821] RIP: 0033:0x7f29b6797b20
	[  +0.000390] Code: Unable to access opcode bytes at RIP 0x7f29b6797af6.
	[  +0.000688] RSP: 002b:00007fff1d5027b0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000799] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000781] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000770] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000791] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001021] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001388] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 09:21:26 up 57 min,  0 user,  load average: 0.47, 0.41, 0.45
	Linux functional-482100 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 09:21:22 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:21:23 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 511.
	Dec 13 09:21:23 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:21:23 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:21:23 functional-482100 kubelet[44084]: E1213 09:21:23.467063   44084 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:21:23 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:21:23 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:21:24 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 512.
	Dec 13 09:21:24 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:21:24 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:21:24 functional-482100 kubelet[44096]: E1213 09:21:24.226702   44096 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:21:24 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:21:24 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:21:24 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 513.
	Dec 13 09:21:24 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:21:24 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:21:24 functional-482100 kubelet[44108]: E1213 09:21:24.991605   44108 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:21:24 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:21:24 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:21:25 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 514.
	Dec 13 09:21:25 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:21:25 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:21:25 functional-482100 kubelet[44136]: E1213 09:21:25.736090   44136 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:21:25 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:21:25 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-482100 -n functional-482100
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-482100 -n functional-482100: exit status 2 (598.3764ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-482100" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (22.50s)
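
The kubelet journal above fails identically on every restart ("kubelet is configured to not run on a host using cgroup v1"), which is why the apiserver behind port 8441 never comes back and the describe-nodes calls are refused. A quick cross-check is to stat the cgroup mount inside the node container (a diagnostic sketch, not part of the recorded run; it assumes the functional-482100 container is still up):

	docker exec functional-482100 stat -fc %T /sys/fs/cgroup/
	# "cgroup2fs" means a cgroup v2 host; "tmpfs" means the legacy cgroup v1 hierarchy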

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (52.68s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-482100 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-482100 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (50.3397807s)

                                                
                                                
** stderr ** 
	E1213 09:21:51.216514    1612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:22:01.302882    1612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:22:11.347033    1612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:22:21.387947    1612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:22:31.424472    1612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-482100 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	E1213 09:21:51.216514    1612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:22:01.302882    1612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:22:11.347033    1612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:22:21.387947    1612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:22:31.424472    1612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	E1213 09:21:51.216514    1612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:22:01.302882    1612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:22:11.347033    1612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:22:21.387947    1612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:22:31.424472    1612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	E1213 09:21:51.216514    1612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:22:01.302882    1612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:22:11.347033    1612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:22:21.387947    1612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:22:31.424472    1612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	E1213 09:21:51.216514    1612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:22:01.302882    1612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:22:11.347033    1612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:22:21.387947    1612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:22:31.424472    1612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	E1213 09:21:51.216514    1612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:22:01.302882    1612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:22:11.347033    1612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:22:21.387947    1612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	E1213 09:22:31.424472    1612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:63845/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
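
For reference, the assertion drives the Go template shown in the Run line above; against a cluster whose apiserver is reachable, the same query (sketched here with label values included, assuming the node reaches Ready) would print the minikube.k8s.io/commit, version, updated_at, name, and primary labels the test expects:

	kubectl --context functional-482100 get nodes -o go-template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}}={{$v}}{{"\n"}}{{end}}'
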
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-482100
helpers_test.go:244: (dbg) docker inspect functional-482100:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa",
	        "Created": "2025-12-13T08:49:07.27080474Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43282,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T08:49:07.556748749Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/hostname",
	        "HostsPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/hosts",
	        "LogPath": "/var/lib/docker/containers/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa/688ac19b44037eb88c62cf417f6f174dea828e6974e39e4494b7decb5b4f2eaa-json.log",
	        "Name": "/functional-482100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-482100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-482100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91-init/diff:/var/lib/docker/overlay2/429aa299c6fcdb1695d08ec7c893c57c033afffcd3ec41fc904bf3236db5abde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aaa2b7455ac07b2ff1e001d5025dd2842b0c468e87ec1549e1a93f8d03650d91/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-482100",
	                "Source": "/var/lib/docker/volumes/functional-482100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-482100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-482100",
	                "name.minikube.sigs.k8s.io": "functional-482100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0846ee7b9ca8cb54809a7d685cd1bf9a4ebcad80c4fa7d3ad64c01e27d0c8bc4",
	            "SandboxKey": "/var/run/docker/netns/0846ee7b9ca8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63841"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63842"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63844"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63845"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-482100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "88ce21d6cbdebdf878313475255fe0fbc85957ab9cf1fa33630b61bbbfd2061c",
	                    "EndpointID": "88d9584a7fae8c35f7938fb422a7bed2f8ec5a3db15bd02c0d2459ed9f8f0e4d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-482100",
	                        "688ac19b4403"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
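
The inspect output ties the kubectl errors together: container port 8441/tcp (the apiserver) is published on 127.0.0.1:63845, exactly the address the repeated "Get https://127.0.0.1:63845/api" EOFs were hitting. The mapping can be pulled out directly with docker inspect's template support (a sketch; the format string is standard Go templating over the inspect JSON shown above):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-482100
	# prints 63845 for this container
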
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-482100 -n functional-482100
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-482100 -n functional-482100: exit status 2 (583.8437ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
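
Note the split state: the Host is Running here even though the earlier APIServer probe reported Stopped, which is why minikube status exits with code 2. All component fields can be captured in one call (a sketch reusing the --format flag already exercised above, and assuming the Kubelet field of the status template alongside the Host and APIServer fields):

	out/minikube-windows-amd64.exe status -p functional-482100 --format "{{.Host}}/{{.Kubelet}}/{{.APIServer}}"
	# with the cluster in this state, expect something like Running/Stopped/Stopped
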
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-482100 logs -n 25: (1.0359969s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ docker-env     │ functional-482100 docker-env                                                                                                                              │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:20 UTC │ 13 Dec 25 09:20 UTC │
	│ ssh            │ functional-482100 ssh sudo cat /etc/test/nested/copy/2968/hosts                                                                                           │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ ssh            │ functional-482100 ssh sudo systemctl is-active crio                                                                                                       │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │                     │
	│ license        │                                                                                                                                                           │ minikube          │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image          │ functional-482100 image load --daemon kicbase/echo-server:functional-482100 --alsologtostderr                                                             │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image          │ functional-482100 image ls                                                                                                                                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image          │ functional-482100 image load --daemon kicbase/echo-server:functional-482100 --alsologtostderr                                                             │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image          │ functional-482100 image ls                                                                                                                                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image          │ functional-482100 image load --daemon kicbase/echo-server:functional-482100 --alsologtostderr                                                             │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image          │ functional-482100 image ls                                                                                                                                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image          │ functional-482100 image save kicbase/echo-server:functional-482100 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image          │ functional-482100 image rm kicbase/echo-server:functional-482100 --alsologtostderr                                                                        │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image          │ functional-482100 image ls                                                                                                                                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image          │ functional-482100 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr                                       │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image          │ functional-482100 image ls                                                                                                                                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ image          │ functional-482100 image save --daemon kicbase/echo-server:functional-482100 --alsologtostderr                                                             │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:21 UTC │ 13 Dec 25 09:21 UTC │
	│ update-context │ functional-482100 update-context --alsologtostderr -v=2                                                                                                   │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:22 UTC │ 13 Dec 25 09:22 UTC │
	│ update-context │ functional-482100 update-context --alsologtostderr -v=2                                                                                                   │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:22 UTC │ 13 Dec 25 09:22 UTC │
	│ update-context │ functional-482100 update-context --alsologtostderr -v=2                                                                                                   │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:22 UTC │ 13 Dec 25 09:22 UTC │
	│ image          │ functional-482100 image ls --format short --alsologtostderr                                                                                               │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:22 UTC │ 13 Dec 25 09:22 UTC │
	│ image          │ functional-482100 image ls --format yaml --alsologtostderr                                                                                                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:22 UTC │ 13 Dec 25 09:22 UTC │
	│ ssh            │ functional-482100 ssh pgrep buildkitd                                                                                                                     │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:22 UTC │                     │
	│ image          │ functional-482100 image ls --format json --alsologtostderr                                                                                                │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:22 UTC │ 13 Dec 25 09:22 UTC │
	│ image          │ functional-482100 image build -t localhost/my-image:functional-482100 testdata\build --alsologtostderr                                                    │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:22 UTC │                     │
	│ image          │ functional-482100 image ls --format table --alsologtostderr                                                                                               │ functional-482100 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 09:22 UTC │ 13 Dec 25 09:22 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:20:51
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:20:51.429239    8152 out.go:360] Setting OutFile to fd 1760 ...
	I1213 09:20:51.482128    8152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:20:51.482128    8152 out.go:374] Setting ErrFile to fd 1944...
	I1213 09:20:51.482128    8152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:20:51.495159    8152 out.go:368] Setting JSON to false
	I1213 09:20:51.497157    8152 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3458,"bootTime":1765614192,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 09:20:51.497157    8152 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 09:20:51.501159    8152 out.go:179] * [functional-482100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 09:20:51.504155    8152 notify.go:221] Checking for updates...
	I1213 09:20:51.506155    8152 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 09:20:51.508155    8152 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:20:51.511156    8152 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 09:20:51.513157    8152 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:20:51.515157    8152 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:20:51.518157    8152 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 09:20:51.519157    8152 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:20:51.637790    8152 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 09:20:51.641441    8152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:20:51.870640    8152 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-13 09:20:51.852535876 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 09:20:51.873468    8152 out.go:179] * Using the docker driver based on existing profile
	I1213 09:20:51.879058    8152 start.go:309] selected driver: docker
	I1213 09:20:51.879058    8152 start.go:927] validating driver "docker" against &{Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:20:51.879058    8152 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:20:51.885141    8152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:20:52.120780    8152 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-13 09:20:52.103897698 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 09:20:52.155676    8152 cni.go:84] Creating CNI manager for ""
	I1213 09:20:52.155676    8152 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 09:20:52.156342    8152 start.go:353] cluster config:
	{Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:20:52.160138    8152 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.829030467Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.829036768Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.829059870Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.829091672Z" level=info msg="Initializing buildkit"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.942041157Z" level=info msg="Completed buildkit initialization"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.947761286Z" level=info msg="Daemon has completed initialization"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.947947300Z" level=info msg="API listen on [::]:2376"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.948053208Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 09:06:48 functional-482100 dockerd[21650]: time="2025-12-13T09:06:48.948082310Z" level=info msg="API listen on /run/docker.sock"
	Dec 13 09:06:48 functional-482100 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 13 09:06:49 functional-482100 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 09:06:49 functional-482100 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 13 09:06:49 functional-482100 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 13 09:06:49 functional-482100 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Start docker client with request timeout 0s"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Loaded network plugin cni"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 13 09:06:49 functional-482100 cri-dockerd[21979]: time="2025-12-13T09:06:49Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 13 09:06:49 functional-482100 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:22:32.991166   46312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:22:32.991988   46312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:22:32.994543   46312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:22:32.995673   46312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:22:32.996809   46312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000787] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001010] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001229] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001341] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001210] FS:  0000000000000000 GS:  0000000000000000
	[Dec13 09:06] CPU: 10 PID: 66098 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000816] RIP: 0033:0x7fb64675ab20
	[  +0.000442] Code: Unable to access opcode bytes at RIP 0x7fb64675aaf6.
	[  +0.000680] RSP: 002b:00007ffe69215830 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000780] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000798] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000796] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000835] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000824] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000777] FS:  0000000000000000 GS:  0000000000000000
	[  +0.885911] CPU: 0 PID: 66226 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000821] RIP: 0033:0x7f29b6797b20
	[  +0.000390] Code: Unable to access opcode bytes at RIP 0x7f29b6797af6.
	[  +0.000688] RSP: 002b:00007fff1d5027b0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000799] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000781] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000770] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000791] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001021] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001388] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 09:22:33 up 58 min,  0 user,  load average: 0.56, 0.43, 0.45
	Linux functional-482100 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 09:22:29 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:22:30 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 600.
	Dec 13 09:22:30 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:22:30 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:22:30 functional-482100 kubelet[46046]: E1213 09:22:30.232725   46046 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:22:30 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:22:30 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:22:30 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 601.
	Dec 13 09:22:30 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:22:30 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:22:30 functional-482100 kubelet[46153]: E1213 09:22:30.974621   46153 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:22:30 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:22:30 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:22:31 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 602.
	Dec 13 09:22:31 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:22:31 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:22:31 functional-482100 kubelet[46177]: E1213 09:22:31.730984   46177 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:22:31 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:22:31 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:22:32 functional-482100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 603.
	Dec 13 09:22:32 functional-482100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:22:32 functional-482100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:22:32 functional-482100 kubelet[46207]: E1213 09:22:32.481511   46207 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:22:32 functional-482100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:22:32 functional-482100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-482100 -n functional-482100
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-482100 -n functional-482100: exit status 2 (639.9578ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-482100" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (52.68s)
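
The kubelet log above shows the shared root cause for this block of failures: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host, so the apiserver on port 8441 never comes up and every kubectl call is refused. One way to confirm the host's cgroup mode from the minikube container (a diagnostic sketch, not part of the test run; the docker driver names the container after the profile):

    # Print the filesystem type of the cgroup mount:
    # "cgroup2fs" means cgroup v2, "tmpfs" means cgroup v1.
    docker exec functional-482100 stat -fc %T /sys/fs/cgroup/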

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-482100 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-482100 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1213 09:20:21.442941    4284 out.go:360] Setting OutFile to fd 916 ...
I1213 09:20:21.564827    4284 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:20:21.564827    4284 out.go:374] Setting ErrFile to fd 1104...
I1213 09:20:21.564827    4284 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:20:21.577256    4284 mustload.go:66] Loading cluster: functional-482100
I1213 09:20:21.578484    4284 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 09:20:21.588532    4284 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
I1213 09:20:21.641528    4284 host.go:66] Checking if "functional-482100" exists ...
I1213 09:20:21.645531    4284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-482100
I1213 09:20:21.703527    4284 api_server.go:166] Checking apiserver status ...
I1213 09:20:21.708543    4284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1213 09:20:21.712531    4284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
I1213 09:20:21.769533    4284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
W1213 09:20:21.901951    4284 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1213 09:20:21.905138    4284 out.go:179] * The control-plane node functional-482100 apiserver is not running: (state=Stopped)
I1213 09:20:21.909509    4284 out.go:179]   To start a cluster, run: "minikube start -p functional-482100"

stdout: * The control-plane node functional-482100 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-482100"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-482100 tunnel --alsologtostderr] ...
helpers_test.go:520: unable to terminate pid 10292: Access is denied.
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-482100 tunnel --alsologtostderr] stdout:
* The control-plane node functional-482100 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-482100"
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-482100 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-482100 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-482100 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-482100 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)
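
Both tunnel daemons exit immediately: the mustload check in the stderr above finds the apiserver Stopped before any tunnel work begins, and exit code 103 accompanies the "apiserver is not running" hint. The state can be confirmed with the same status check the test helpers use (sketch):

    out/minikube-windows-amd64.exe status -p functional-482100 --format={{.APIServer}}
    # Prints "Stopped" here; a working tunnel requires "Running".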

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (20.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-482100 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-482100 apply -f testdata\testsvc.yaml: exit status 1 (20.192709s)

** stderr ** 
	error: error validating "testdata\\testsvc.yaml": error validating data: failed to download openapi: Get "https://127.0.0.1:63845/openapi/v2?timeout=32s": EOF; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-482100 apply -f testdata\testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (20.20s)
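
The apply dies while kubectl downloads the OpenAPI schema for client-side validation, so the suggested --validate=false would not help: skipping validation only moves the failure to the apply request itself, since nothing is serving behind 127.0.0.1:63845. A direct reachability probe (hypothetical, not part of the test):

    # Ask the apiserver for its readiness report; EOF or "connection refused" confirms it is down.
    kubectl --context functional-482100 get --raw /readyz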

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-482100 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-482100 create deployment hello-node --image kicbase/echo-server: exit status 1 (93.2712ms)

** stderr ** 
	error: failed to create deployment: Post "https://127.0.0.1:63845/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": EOF

** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-482100 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.10s)
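
The create call gets an EOF from the apps/v1 endpoint for the same reason, and because hello-node is never deployed, the ServiceCmd subtests below cannot pass even where their own apiserver checks succeed. The precondition can be checked directly (sketch):

    # Expected after a successful DeployApp: hello-node listed with one ready replica.
    kubectl --context functional-482100 get deployment hello-node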

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-482100 service list: exit status 103 (470.8867ms)

-- stdout --
	* The control-plane node functional-482100 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-482100"

-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-windows-amd64.exe -p functional-482100 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-482100 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-482100\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.47s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.48s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-482100 service list -o json: exit status 103 (483.0034ms)

-- stdout --
	* The control-plane node functional-482100 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-482100"

-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-windows-amd64.exe -p functional-482100 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.48s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-482100 service --namespace=default --https --url hello-node: exit status 103 (497.7387ms)

-- stdout --
	* The control-plane node functional-482100 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-482100"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-482100 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-482100 service hello-node --url --format={{.IP}}: exit status 103 (490.2455ms)

-- stdout --
	* The control-plane node functional-482100 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-482100"

-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-482100 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-482100 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-482100\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.49s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.48s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-482100 service hello-node --url: exit status 103 (478.4087ms)

-- stdout --
	* The control-plane node functional-482100 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-482100"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-482100 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-482100 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-482100"
functional_test.go:1579: failed to parse "* The control-plane node functional-482100 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-482100\"": parse "* The control-plane node functional-482100 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-482100\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.48s)
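
List, JSONOutput, HTTPS, Format, and URL all fail the same way: exit code 103 and the "apiserver is not running" hint where service output should be, which is also why the URL parser above rejects the text as an invalid URL. On a healthy cluster the same command returns a parseable endpoint (sketch; the port is assigned dynamically):

    out/minikube-windows-amd64.exe -p functional-482100 service hello-node --url
    # Expected shape: http://127.0.0.1:<port>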

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/powershell (2.87s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/powershell
functional_test.go:514: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-482100 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-482100"
functional_test.go:514: (dbg) Non-zero exit: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-482100 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-482100": exit status 1 (2.8623542s)

-- stdout --
	functional-482100
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	docker-env: in-use
	

-- /stdout --
functional_test.go:520: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/powershell (2.87s)
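
The docker-env eval itself succeeded (status reports "docker-env: in-use"); the non-zero exit comes from the trailing status command, since minikube status exits non-zero whenever a component is not Running, and the apiserver here is Stopped. For reference, the PowerShell form of the eval (a sketch; --shell powershell is optional when minikube detects the shell):

    & out/minikube-windows-amd64.exe -p functional-482100 docker-env --shell powershell | Invoke-Expression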

TestKubernetesUpgrade (846.93s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-481200 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-481200 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker: (48.8851382s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-481200
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-481200: (12.3140555s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-481200 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-481200 status --format={{.Host}}: exit status 7 (204.2192ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-481200 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-481200 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker: exit status 109 (12m43.3438868s)

-- stdout --
	* [kubernetes-upgrade-481200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-481200" primary control-plane node in "kubernetes-upgrade-481200" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	
	

-- /stdout --
** stderr ** 
	I1213 10:07:25.096648    1468 out.go:360] Setting OutFile to fd 1724 ...
	I1213 10:07:25.143593    1468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:07:25.143593    1468 out.go:374] Setting ErrFile to fd 1856...
	I1213 10:07:25.143593    1468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:07:25.159700    1468 out.go:368] Setting JSON to false
	I1213 10:07:25.163852    1468 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6252,"bootTime":1765614192,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 10:07:25.163852    1468 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 10:07:25.167692    1468 out.go:179] * [kubernetes-upgrade-481200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 10:07:25.171174    1468 notify.go:221] Checking for updates...
	I1213 10:07:25.173182    1468 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:07:25.176204    1468 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:07:25.178298    1468 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 10:07:25.180503    1468 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:07:25.183827    1468 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:07:25.186322    1468 config.go:182] Loaded profile config "kubernetes-upgrade-481200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I1213 10:07:25.187570    1468 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:07:25.299486    1468 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 10:07:25.302611    1468 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:07:25.529442    1468 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:89 OomKillDisable:true NGoroutines:96 SystemTime:2025-12-13 10:07:25.509916094 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:07:25.533197    1468 out.go:179] * Using the docker driver based on existing profile
	I1213 10:07:25.535217    1468 start.go:309] selected driver: docker
	I1213 10:07:25.535217    1468 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-481200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-481200 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:07:25.535217    1468 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:07:25.628427    1468 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:07:25.891922    1468 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:91 OomKillDisable:true NGoroutines:92 SystemTime:2025-12-13 10:07:25.873026736 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:07:25.892790    1468 cni.go:84] Creating CNI manager for ""
	I1213 10:07:25.892829    1468 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 10:07:25.892829    1468 start.go:353] cluster config:
	{Name:kubernetes-upgrade-481200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-481200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:07:25.895660    1468 out.go:179] * Starting "kubernetes-upgrade-481200" primary control-plane node in "kubernetes-upgrade-481200" cluster
	I1213 10:07:25.899192    1468 cache.go:134] Beginning downloading kic base image for docker with docker
	I1213 10:07:25.900761    1468 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:07:25.904539    1468 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 10:07:25.904539    1468 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:07:25.904539    1468 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1213 10:07:25.904539    1468 cache.go:65] Caching tarball of preloaded images
	I1213 10:07:25.904539    1468 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 10:07:25.905541    1468 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1213 10:07:25.905541    1468 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-481200\config.json ...
	I1213 10:07:25.993181    1468 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:07:25.993250    1468 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:07:25.993250    1468 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:07:25.993250    1468 start.go:360] acquireMachinesLock for kubernetes-upgrade-481200: {Name:mk66c7bcef800d3231eb2cbc64a987c8b202f357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:07:25.993250    1468 start.go:364] duration metric: took 0s to acquireMachinesLock for "kubernetes-upgrade-481200"
	I1213 10:07:25.993250    1468 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:07:25.993250    1468 fix.go:54] fixHost starting: 
	I1213 10:07:26.000309    1468 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-481200 --format={{.State.Status}}
	I1213 10:07:26.058415    1468 fix.go:112] recreateIfNeeded on kubernetes-upgrade-481200: state=Stopped err=<nil>
	W1213 10:07:26.058415    1468 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:07:26.062148    1468 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-481200" ...
	I1213 10:07:26.065818    1468 cli_runner.go:164] Run: docker start kubernetes-upgrade-481200
	I1213 10:07:26.880664    1468 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-481200 --format={{.State.Status}}
	I1213 10:07:26.934689    1468 kic.go:430] container "kubernetes-upgrade-481200" state is running.
	I1213 10:07:26.939691    1468 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-481200
	I1213 10:07:26.992665    1468 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-481200\config.json ...
	I1213 10:07:26.993670    1468 machine.go:94] provisionDockerMachine start ...
	I1213 10:07:26.998677    1468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-481200
	I1213 10:07:27.054674    1468 main.go:143] libmachine: Using SSH client type: native
	I1213 10:07:27.054674    1468 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 52495 <nil> <nil>}
	I1213 10:07:27.054674    1468 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:07:27.056677    1468 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 10:07:30.242243    1468 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-481200
	
	I1213 10:07:30.242243    1468 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-481200"
	I1213 10:07:30.246472    1468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-481200
	I1213 10:07:30.300319    1468 main.go:143] libmachine: Using SSH client type: native
	I1213 10:07:30.300724    1468 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 52495 <nil> <nil>}
	I1213 10:07:30.300724    1468 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-481200 && echo "kubernetes-upgrade-481200" | sudo tee /etc/hostname
	I1213 10:07:30.600784    1468 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-481200
	
	I1213 10:07:30.605306    1468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-481200
	I1213 10:07:30.662737    1468 main.go:143] libmachine: Using SSH client type: native
	I1213 10:07:30.662737    1468 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 52495 <nil> <nil>}
	I1213 10:07:30.662737    1468 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-481200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-481200/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-481200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:07:30.843552    1468 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:07:30.843552    1468 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1213 10:07:30.843552    1468 ubuntu.go:190] setting up certificates
	I1213 10:07:30.843552    1468 provision.go:84] configureAuth start
	I1213 10:07:30.847855    1468 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-481200
	I1213 10:07:30.899926    1468 provision.go:143] copyHostCerts
	I1213 10:07:30.900090    1468 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1213 10:07:30.900090    1468 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1213 10:07:30.900090    1468 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1213 10:07:30.901581    1468 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1213 10:07:30.901610    1468 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1213 10:07:30.901631    1468 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1213 10:07:30.902223    1468 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1213 10:07:30.902223    1468 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1213 10:07:30.902756    1468 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1213 10:07:30.903049    1468 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-481200 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-481200 localhost minikube]
	I1213 10:07:30.970193    1468 provision.go:177] copyRemoteCerts
	I1213 10:07:30.974375    1468 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:07:30.977720    1468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-481200
	I1213 10:07:31.031685    1468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52495 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-481200\id_rsa Username:docker}
	I1213 10:07:31.157204    1468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:07:31.187578    1468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I1213 10:07:31.215731    1468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:07:31.246759    1468 provision.go:87] duration metric: took 403.2012ms to configureAuth
	I1213 10:07:31.246759    1468 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:07:31.247330    1468 config.go:182] Loaded profile config "kubernetes-upgrade-481200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:07:31.250945    1468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-481200
	I1213 10:07:31.310850    1468 main.go:143] libmachine: Using SSH client type: native
	I1213 10:07:31.311417    1468 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 52495 <nil> <nil>}
	I1213 10:07:31.311475    1468 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 10:07:31.479846    1468 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1213 10:07:31.479846    1468 ubuntu.go:71] root file system type: overlay
	I1213 10:07:31.479846    1468 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 10:07:31.483839    1468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-481200
	I1213 10:07:31.534832    1468 main.go:143] libmachine: Using SSH client type: native
	I1213 10:07:31.535833    1468 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 52495 <nil> <nil>}
	I1213 10:07:31.535833    1468 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 10:07:31.725904    1468 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 10:07:31.730022    1468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-481200
	I1213 10:07:31.788337    1468 main.go:143] libmachine: Using SSH client type: native
	I1213 10:07:31.788337    1468 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 52495 <nil> <nil>}
	I1213 10:07:31.788337    1468 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 10:07:31.975921    1468 main.go:143] libmachine: SSH cmd err, output: <nil>: 
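
The one-liner above is the provisioner's idempotent unit update: diff exits non-zero only when the regenerated unit differs from the installed one, so the install-and-restart branch runs only on change. A minimal standalone sketch of the same pattern, using the paths from this log:

    # Swap in the regenerated unit only if it differs from the installed one,
    # then reload systemd and restart the service (as logged above).
    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload
      sudo systemctl -f enable docker
      sudo systemctl -f restart docker
    fi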
	I1213 10:07:31.975921    1468 machine.go:97] duration metric: took 4.9821834s to provisionDockerMachine
	I1213 10:07:31.975921    1468 start.go:293] postStartSetup for "kubernetes-upgrade-481200" (driver="docker")
	I1213 10:07:31.975921    1468 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:07:31.980907    1468 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:07:31.983913    1468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-481200
	I1213 10:07:32.033918    1468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52495 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-481200\id_rsa Username:docker}
	I1213 10:07:32.163665    1468 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:07:32.171420    1468 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:07:32.171420    1468 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:07:32.171420    1468 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1213 10:07:32.171420    1468 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1213 10:07:32.172921    1468 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> 29682.pem in /etc/ssl/certs
	I1213 10:07:32.177938    1468 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 10:07:32.190840    1468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /etc/ssl/certs/29682.pem (1708 bytes)
	I1213 10:07:32.227391    1468 start.go:296] duration metric: took 251.4666ms for postStartSetup
	I1213 10:07:32.232390    1468 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:07:32.237389    1468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-481200
	I1213 10:07:32.287384    1468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52495 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-481200\id_rsa Username:docker}
	I1213 10:07:32.409310    1468 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:07:32.418309    1468 fix.go:56] duration metric: took 6.4249715s for fixHost
	I1213 10:07:32.418309    1468 start.go:83] releasing machines lock for "kubernetes-upgrade-481200", held for 6.4249715s
	I1213 10:07:32.422608    1468 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-481200
	I1213 10:07:32.478363    1468 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1213 10:07:32.483031    1468 ssh_runner.go:195] Run: cat /version.json
	I1213 10:07:32.484092    1468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-481200
	I1213 10:07:32.486463    1468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-481200
	I1213 10:07:32.538547    1468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52495 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-481200\id_rsa Username:docker}
	I1213 10:07:32.557540    1468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52495 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-481200\id_rsa Username:docker}
	W1213 10:07:32.658944    1468 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
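
This status-127 failure, not a real network problem, is what trips the registry warning a few lines below: the Windows binary name curl.exe leaks into the Linux guest, where only curl exists. A hedged sketch of the probe with the plain binary name (container name taken from this log; assumes curl is present in the kicbase image):

    # Re-run the reachability probe inside the guest with the Linux
    # binary name instead of curl.exe.
    docker exec kubernetes-upgrade-481200 curl -sS -m 2 https://registry.k8s.io/ \
      && echo "registry reachable from inside the container" \
      || echo "registry probe failed"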
	I1213 10:07:32.693703    1468 ssh_runner.go:195] Run: systemctl --version
	I1213 10:07:32.706182    1468 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:07:32.715191    1468 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:07:32.718188    1468 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:07:32.735465    1468 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:07:32.735465    1468 start.go:496] detecting cgroup driver to use...
	I1213 10:07:32.735465    1468 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:07:32.735465    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1213 10:07:32.756066    1468 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1213 10:07:32.756066    1468 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1213 10:07:32.766075    1468 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:07:32.785471    1468 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:07:32.801477    1468 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:07:32.806597    1468 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:07:32.832303    1468 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:07:32.853295    1468 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:07:32.873160    1468 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:07:32.891966    1468 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:07:32.908972    1468 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:07:32.927213    1468 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:07:32.948801    1468 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
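
The sed series above rewrites /etc/containerd/config.toml in place: pin the pause sandbox image, force SystemdCgroup = false to match the detected cgroupfs driver, migrate legacy runtime names to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, and re-enable unprivileged ports. A condensed sketch of the same edits:

    # Condensed form of the logged config.toml rewrites.
    sudo sed -i -r \
      -e 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' \
      -e 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' \
      -e 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' \
      -e 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' \
      /etc/containerd/config.toml
    sudo systemctl restart containerd   # done a few steps later in the log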
	I1213 10:07:32.968767    1468 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:07:32.988201    1468 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:07:33.008838    1468 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:07:33.151324    1468 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 10:07:33.311278    1468 start.go:496] detecting cgroup driver to use...
	I1213 10:07:33.311278    1468 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:07:33.316097    1468 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 10:07:33.345567    1468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:07:33.368164    1468 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:07:33.444122    1468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:07:33.467078    1468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:07:33.486561    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:07:33.512506    1468 ssh_runner.go:195] Run: which cri-dockerd
	I1213 10:07:33.524604    1468 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 10:07:33.538875    1468 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1213 10:07:33.565161    1468 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 10:07:33.711762    1468 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 10:07:33.866914    1468 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 10:07:33.866914    1468 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 10:07:33.896543    1468 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1213 10:07:33.919257    1468 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:07:34.053504    1468 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 10:07:42.412413    1468 ssh_runner.go:235] Completed: sudo systemctl restart docker: (8.3587952s)
	I1213 10:07:42.418411    1468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:07:42.446412    1468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 10:07:42.472407    1468 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1213 10:07:42.502412    1468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 10:07:42.526189    1468 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 10:07:42.679906    1468 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 10:07:42.864229    1468 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:07:43.045542    1468 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 10:07:43.071554    1468 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1213 10:07:43.095536    1468 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:07:43.252493    1468 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 10:07:43.403332    1468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 10:07:43.429322    1468 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 10:07:43.433334    1468 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 10:07:43.440320    1468 start.go:564] Will wait 60s for crictl version
	I1213 10:07:43.444331    1468 ssh_runner.go:195] Run: which crictl
	I1213 10:07:43.455322    1468 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:07:43.506039    1468 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1213 10:07:43.510894    1468 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 10:07:43.559468    1468 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 10:07:43.612871    1468 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1213 10:07:43.616498    1468 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-481200 dig +short host.docker.internal
	I1213 10:07:43.763615    1468 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1213 10:07:43.769789    1468 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1213 10:07:43.779621    1468 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:07:43.802656    1468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-481200
	I1213 10:07:43.857726    1468 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-481200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-481200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:07:43.858726    1468 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 10:07:43.861723    1468 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 10:07:43.898849    1468 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.0
	registry.k8s.io/kube-scheduler:v1.28.0
	registry.k8s.io/kube-controller-manager:v1.28.0
	registry.k8s.io/kube-proxy:v1.28.0
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 10:07:43.898849    1468 docker.go:697] registry.k8s.io/kube-apiserver:v1.35.0-beta.0 wasn't preloaded
	I1213 10:07:43.904549    1468 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1213 10:07:43.924792    1468 ssh_runner.go:195] Run: which lz4
	I1213 10:07:43.935362    1468 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 10:07:43.942366    1468 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 10:07:43.942366    1468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (284622240 bytes)
	I1213 10:07:47.126140    1468 docker.go:655] duration metric: took 3.1947313s to copy over tarball
	I1213 10:07:47.130141    1468 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 10:07:55.454975    1468 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.3247206s)
	I1213 10:07:55.454975    1468 ssh_runner.go:146] rm: /preloaded.tar.lz4
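
Since the v1.35.0-beta.0 images were not in the docker store, the cached preload tarball is copied over SSH and unpacked directly into /var; the xattr flags preserve file capabilities on the extracted binaries. The extraction step as logged:

    # Unpack the preloaded image tarball into /var, keeping
    # security.capability xattrs intact, then remove the tarball.
    sudo tar --xattrs --xattrs-include security.capability \
      -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4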
	I1213 10:07:55.505682    1468 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1213 10:07:55.520320    1468 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2660 bytes)
	I1213 10:07:57.734450    1468 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1213 10:07:57.762052    1468 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:07:57.926067    1468 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 10:07:58.987292    1468 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0612105s)
	I1213 10:07:58.991290    1468 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 10:07:59.035301    1468 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 10:07:59.035410    1468 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:07:59.035438    1468 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 docker true true} ...
	I1213 10:07:59.035546    1468 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-481200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-481200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 10:07:59.038811    1468 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1213 10:07:59.132759    1468 cni.go:84] Creating CNI manager for ""
	I1213 10:07:59.132759    1468 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 10:07:59.132759    1468 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:07:59.132759    1468 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-481200 NodeName:kubernetes-upgrade-481200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:07:59.133759    1468 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-481200"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
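
A sanity check that could be run on a generated config like the one above: recent kubeadm releases ship a validate subcommand (shown here as an illustration only; this log does not run it):

    # Hypothetical validation pass over the generated config, using the
    # kubeadm binary path seen elsewhere in this log.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm \
      config validate --config /var/tmp/minikube/kubeadm.yaml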
	I1213 10:07:59.137755    1468 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:07:59.149769    1468 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:07:59.153773    1468 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:07:59.166760    1468 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (331 bytes)
	I1213 10:07:59.186758    1468 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:07:59.213659    1468 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1213 10:07:59.240939    1468 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:07:59.247801    1468 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
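
The hosts entry is refreshed with a filter-then-append rewrite into a temp file, copied back with sudo in a single step so the mapping is never duplicated. The same pattern, unrolled:

    # Drop any stale control-plane.minikube.internal line, append the
    # current mapping, and copy the result back as root (as logged).
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '192.168.85.2\tcontrol-plane.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts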
	I1213 10:07:59.272136    1468 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:07:59.442594    1468 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:07:59.471052    1468 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-481200 for IP: 192.168.85.2
	I1213 10:07:59.471092    1468 certs.go:195] generating shared ca certs ...
	I1213 10:07:59.471133    1468 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:07:59.471761    1468 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1213 10:07:59.472075    1468 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1213 10:07:59.472664    1468 certs.go:257] generating profile certs ...
	I1213 10:07:59.473261    1468 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-481200\client.key
	I1213 10:07:59.473261    1468 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-481200\apiserver.key.7f75fcb8
	I1213 10:07:59.473261    1468 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-481200\proxy-client.key
	I1213 10:07:59.474261    1468 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem (1338 bytes)
	W1213 10:07:59.475266    1468 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968_empty.pem, impossibly tiny 0 bytes
	I1213 10:07:59.475266    1468 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1213 10:07:59.475266    1468 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1213 10:07:59.475266    1468 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1213 10:07:59.475266    1468 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1213 10:07:59.476253    1468 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem (1708 bytes)
	I1213 10:07:59.477258    1468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:07:59.504248    1468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:07:59.532255    1468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:07:59.567246    1468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 10:07:59.595143    1468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-481200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1213 10:07:59.627094    1468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-481200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 10:07:59.655346    1468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-481200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:07:59.682186    1468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-481200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 10:07:59.714218    1468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem --> /usr/share/ca-certificates/2968.pem (1338 bytes)
	I1213 10:07:59.744364    1468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /usr/share/ca-certificates/29682.pem (1708 bytes)
	I1213 10:07:59.777890    1468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:07:59.804314    1468 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:07:59.830126    1468 ssh_runner.go:195] Run: openssl version
	I1213 10:07:59.843408    1468 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:07:59.859410    1468 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:07:59.876326    1468 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:07:59.884093    1468 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:07:59.887621    1468 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:07:59.940997    1468 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:07:59.958019    1468 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2968.pem
	I1213 10:07:59.976004    1468 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2968.pem /etc/ssl/certs/2968.pem
	I1213 10:07:59.994085    1468 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2968.pem
	I1213 10:08:00.001673    1468 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:48 /usr/share/ca-certificates/2968.pem
	I1213 10:08:00.005688    1468 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2968.pem
	I1213 10:08:00.054099    1468 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:08:00.071676    1468 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29682.pem
	I1213 10:08:00.087676    1468 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29682.pem /etc/ssl/certs/29682.pem
	I1213 10:08:00.103676    1468 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29682.pem
	I1213 10:08:00.110682    1468 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:48 /usr/share/ca-certificates/29682.pem
	I1213 10:08:00.114677    1468 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29682.pem
	I1213 10:08:00.169472    1468 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
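
Each CA above is linked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL locates trust anchors. The hash-link step for one cert, unrolled from the ln/openssl/test sequence:

    # Install a CA under its OpenSSL subject-hash name.
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"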
	I1213 10:08:00.187456    1468 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:08:00.198424    1468 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:08:00.250089    1468 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:08:00.305189    1468 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:08:00.353751    1468 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:08:00.404468    1468 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:08:00.464048    1468 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
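
The -checkend 86400 probes flag any control-plane certificate that would expire within 24 hours, which would force regeneration. The same six checks as a loop:

    # Mirror the individual -checkend probes above; openssl exits
    # non-zero when the cert expires within the given window.
    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
             etcd/healthcheck-client etcd/peer front-proxy-client; do
      openssl x509 -noout -checkend 86400 \
        -in "/var/lib/minikube/certs/$c.crt" \
        || echo "cert $c expires within 24h"
    done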
	I1213 10:08:00.520375    1468 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-481200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-481200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:08:00.524549    1468 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 10:08:00.569333    1468 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:08:00.581581    1468 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:08:00.582150    1468 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:08:00.587184    1468 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:08:00.599440    1468 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:08:00.602441    1468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-481200
	I1213 10:08:00.655448    1468 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-481200" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:08:00.656441    1468 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-481200" cluster setting kubeconfig missing "kubernetes-upgrade-481200" context setting]
	I1213 10:08:00.656441    1468 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:08:00.674441    1468 kapi.go:59] client config for kubernetes-upgrade-481200: &rest.Config{Host:"https://127.0.0.1:52499", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-481200/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-481200/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff612609080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:08:00.675451    1468 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 10:08:00.675451    1468 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 10:08:00.675451    1468 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 10:08:00.675451    1468 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 10:08:00.675451    1468 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 10:08:00.678440    1468 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:08:00.692442    1468 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 10:06:55.964966025 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 10:07:59.227875001 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.85.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "kubernetes-upgrade-481200"
	   kubeletExtraArgs:
	-    node-ip: 192.168.85.2
	+    - name: "node-ip"
	+      value: "192.168.85.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
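
The drift shown above is purely the kubeadm schema change from v1beta3 to v1beta4 (map-style extraArgs become name/value lists, the etcd proxy-refresh-interval extraArg is dropped) plus the kubernetesVersion bump, so the file is regenerated rather than patched. For reference, kubeadm can perform this schema migration itself; a sketch, not a step this log runs:

    # Illustrative only: migrate an old-schema config to the current
    # apiVersion with kubeadm's own converter.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm.migrated.yaml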
	I1213 10:08:00.692442    1468 kubeadm.go:1161] stopping kube-system containers ...
	I1213 10:08:00.695440    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 10:08:00.728871    1468 docker.go:484] Stopping containers: [4a078925b3b8 ee2fab724cef 234db481cf83 871e19efc5fa 7a3ff058f253 91beda2e5ace f33186f5b243 2008efb3b588]
	I1213 10:08:00.732361    1468 ssh_runner.go:195] Run: docker stop 4a078925b3b8 ee2fab724cef 234db481cf83 871e19efc5fa 7a3ff058f253 91beda2e5ace f33186f5b243 2008efb3b588
	I1213 10:08:00.778484    1468 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 10:08:00.803228    1468 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:08:00.817237    1468 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5643 Dec 13 10:06 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec 13 10:06 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 13 10:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec 13 10:06 /etc/kubernetes/scheduler.conf
	
	I1213 10:08:00.820315    1468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 10:08:00.837231    1468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 10:08:00.856232    1468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 10:08:00.869230    1468 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:08:00.875235    1468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:08:00.891232    1468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 10:08:00.903239    1468 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:08:00.907238    1468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:08:00.923232    1468 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:08:00.940240    1468 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:08:01.004741    1468 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:08:01.511445    1468 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:08:01.764319    1468 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:08:01.838693    1468 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
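
Rather than a full kubeadm init, the restart path replays individual phases against the new config: certs, kubeconfigs, kubelet bring-up, control-plane static pods, then local etcd. The logged sequence, condensed:

    # Phase-wise control-plane regeneration, as run above.
    BINDIR=/var/lib/minikube/binaries/v1.35.0-beta.0
    CFG=/var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" \
                 "control-plane all" "etcd local"; do
      sudo env PATH="$BINDIR:$PATH" kubeadm init phase $phase --config "$CFG"
    done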
	I1213 10:08:01.922150    1468 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:08:01.926994    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:02.426365    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:02.926378    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:03.427562    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:03.926472    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:04.426943    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:04.927737    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:05.425917    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:05.927465    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:06.427048    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:06.927703    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:07.427075    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:07.927118    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:08.427750    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:08.927883    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:09.426460    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:09.927612    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:10.427179    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:10.927452    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:11.428384    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:11.927123    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:12.425912    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:12.927722    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:13.425722    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:13.926881    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:14.427530    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:14.927079    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:15.426527    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:15.925737    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:16.427344    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:16.927626    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:17.426692    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:17.928055    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:18.427143    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:18.927027    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:19.427328    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:19.928268    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:20.428074    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:20.925549    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:21.425583    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:21.927184    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:22.427611    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:22.927069    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:23.426899    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:23.927617    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:24.427257    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:24.926893    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:25.427317    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:25.926995    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:26.429348    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:26.927755    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:27.427823    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:27.927080    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:28.426489    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:28.927466    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:29.428103    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:29.927470    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:30.427848    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:30.927582    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:31.426910    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:31.930186    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:32.426775    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:32.926998    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:33.430194    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:33.929471    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:34.427606    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:34.926882    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:35.429691    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:35.927470    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:36.428311    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:36.926466    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:37.426839    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:37.928005    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:38.426765    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:38.927902    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:39.426666    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:39.929466    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:40.427193    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:40.928626    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:41.427324    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:41.927823    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:42.427288    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:42.927261    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:43.428912    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:43.927315    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:44.427384    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:44.928376    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:45.427404    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:45.926676    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:46.426818    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:46.927067    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:47.426867    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:47.928044    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:48.429618    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:48.928079    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:49.427952    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:49.928608    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:50.426734    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:50.927767    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:51.427874    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:51.929579    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:52.427191    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:52.931501    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:53.427192    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:53.928894    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:54.428562    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:54.927774    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:55.428628    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:55.926877    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:56.427471    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:56.929389    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:57.427833    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:57.927588    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:58.427868    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:58.927894    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:59.428053    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:59.927835    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:00.428790    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:00.929105    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:01.428385    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
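
The block above is a fixed-interval wait: poll for a kube-apiserver process roughly twice a second until it appears or the window closes; here it never appears, so minikube falls back to gathering logs below. The loop, reconstructed (the 60 s deadline is an assumption read off the timestamps):

    # Poll for the apiserver process every 0.5 s, as the log does.
    deadline=$((SECONDS + 60))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$SECONDS" -ge "$deadline" ] && { echo 'timed out waiting for kube-apiserver' >&2; break; }
      sleep 0.5
    done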
	I1213 10:09:01.926934    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:09:01.971863    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:09:01.976417    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:09:02.009666    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:09:02.013665    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:09:02.047277    1468 logs.go:282] 0 containers: []
	W1213 10:09:02.047277    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:09:02.051225    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:09:02.086793    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:09:02.090776    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:09:02.123785    1468 logs.go:282] 0 containers: []
	W1213 10:09:02.123785    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:02.127811    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:09:02.166995    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:09:02.170994    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:09:02.204914    1468 logs.go:282] 0 containers: []
	W1213 10:09:02.204914    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:02.207897    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:09:02.235902    1468 logs.go:282] 0 containers: []
	W1213 10:09:02.235902    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:09:02.235902    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:09:02.235902    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:09:02.281325    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:09:02.281325    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:09:02.331087    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:09:02.331087    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:09:02.362832    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:09:02.362832    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:02.439911    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:02.440924    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:02.476770    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:02.476853    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:02.559572    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:02.559572    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:09:02.559572    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:09:02.604416    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:09:02.604416    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:09:02.650501    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:02.650501    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:05.229438    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:05.249438    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:09:05.280245    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:09:05.283243    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:09:05.313250    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:09:05.316478    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:09:05.344902    1468 logs.go:282] 0 containers: []
	W1213 10:09:05.344902    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:09:05.350006    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:09:05.386753    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:09:05.390711    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:09:05.420904    1468 logs.go:282] 0 containers: []
	W1213 10:09:05.420904    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:05.423902    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:09:05.457037    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:09:05.463564    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:09:05.504341    1468 logs.go:282] 0 containers: []
	W1213 10:09:05.504341    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:05.508515    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:09:05.537708    1468 logs.go:282] 0 containers: []
	W1213 10:09:05.537708    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:09:05.537708    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:05.537708    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:05.580741    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:09:05.580741    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:09:05.627737    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:09:05.627737    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:09:05.676660    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:09:05.676660    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:05.736639    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:05.736639    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:05.842422    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:05.842422    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:09:05.842422    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:09:05.892853    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:09:05.892853    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:09:05.934515    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:09:05.934608    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:09:05.970042    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:05.970042    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:08.539316    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:08.561113    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:09:08.598297    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:09:08.603444    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:09:08.636183    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:09:08.639940    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:09:08.674759    1468 logs.go:282] 0 containers: []
	W1213 10:09:08.674759    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:09:08.680184    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:09:08.719604    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:09:08.724631    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:09:08.769410    1468 logs.go:282] 0 containers: []
	W1213 10:09:08.770407    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:08.774396    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:09:08.807536    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:09:08.811534    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:09:08.841090    1468 logs.go:282] 0 containers: []
	W1213 10:09:08.841090    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:08.845552    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:09:08.885184    1468 logs.go:282] 0 containers: []
	W1213 10:09:08.885184    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:09:08.885184    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:09:08.886180    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:09:08.931274    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:09:08.931274    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:08.987406    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:08.987406    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:09.053798    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:09.053798    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:09.139717    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:09.139717    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:09:09.139717    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:09:09.200735    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:09:09.200735    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:09:09.232713    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:09.232713    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:09.275051    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:09:09.275051    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:09:09.327982    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:09:09.327982    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:09:11.877147    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:11.901138    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:09:11.941137    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:09:11.944136    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:09:11.980145    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:09:11.985152    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:09:12.019138    1468 logs.go:282] 0 containers: []
	W1213 10:09:12.019138    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:09:12.022145    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:09:12.057062    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:09:12.063051    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:09:12.114050    1468 logs.go:282] 0 containers: []
	W1213 10:09:12.114050    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:12.120051    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:09:12.168052    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:09:12.173055    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:09:12.214050    1468 logs.go:282] 0 containers: []
	W1213 10:09:12.214050    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:12.220052    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:09:12.268050    1468 logs.go:282] 0 containers: []
	W1213 10:09:12.268050    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:09:12.268050    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:09:12.268050    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:09:12.331038    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:09:12.331038    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:09:12.384064    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:09:12.384064    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:09:12.419039    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:12.419039    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:12.490924    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:09:12.490924    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:12.556914    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:12.556914    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:12.634919    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:12.634919    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:12.737864    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:12.737864    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:09:12.737864    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:09:12.799431    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:09:12.800436    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:09:15.348909    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:15.368907    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:09:15.404925    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:09:15.407907    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:09:15.438922    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:09:15.442907    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:09:15.473909    1468 logs.go:282] 0 containers: []
	W1213 10:09:15.473909    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:09:15.479924    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:09:15.524913    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:09:15.529917    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:09:15.564919    1468 logs.go:282] 0 containers: []
	W1213 10:09:15.564919    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:15.567908    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:09:15.605913    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:09:15.608908    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:09:15.638920    1468 logs.go:282] 0 containers: []
	W1213 10:09:15.638920    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:15.642928    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:09:15.681917    1468 logs.go:282] 0 containers: []
	W1213 10:09:15.681917    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:09:15.681917    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:15.681917    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:15.720926    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:09:15.721923    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:09:15.777934    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:09:15.777934    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:09:15.814924    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:15.814924    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:15.886038    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:15.886038    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:15.975736    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:15.976746    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:09:15.976746    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:09:16.027741    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:09:16.027741    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:09:16.081288    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:09:16.081288    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:09:16.125615    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:09:16.125615    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:18.696017    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:18.719861    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:09:18.753462    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:09:18.758504    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:09:18.789194    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:09:18.792192    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:09:18.824207    1468 logs.go:282] 0 containers: []
	W1213 10:09:18.824207    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:09:18.828205    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:09:18.857197    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:09:18.860201    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:09:18.897197    1468 logs.go:282] 0 containers: []
	W1213 10:09:18.897197    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:18.900206    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:09:18.930209    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:09:18.933207    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:09:18.964903    1468 logs.go:282] 0 containers: []
	W1213 10:09:18.964903    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:18.968519    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:09:19.000452    1468 logs.go:282] 0 containers: []
	W1213 10:09:19.000452    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:09:19.000452    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:09:19.000452    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:09:19.053169    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:09:19.053169    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:09:19.109890    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:09:19.109890    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:09:19.152261    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:09:19.152261    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:19.208054    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:19.208107    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:19.273177    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:19.273177    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:19.323796    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:19.323796    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:19.409801    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:19.409801    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:09:19.409801    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:09:19.457808    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:09:19.457808    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:09:22.014998    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:22.111479    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:09:22.189381    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:09:22.197783    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:09:22.290513    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:09:22.296136    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:09:22.365306    1468 logs.go:282] 0 containers: []
	W1213 10:09:22.365306    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:09:22.372754    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:09:22.426780    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:09:22.431360    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:09:22.489825    1468 logs.go:282] 0 containers: []
	W1213 10:09:22.489825    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:22.499605    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:09:22.548551    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:09:22.555563    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:09:22.606551    1468 logs.go:282] 0 containers: []
	W1213 10:09:22.606551    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:22.612564    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:09:22.659549    1468 logs.go:282] 0 containers: []
	W1213 10:09:22.659549    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:09:22.659549    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:22.659549    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:22.723375    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:22.723375    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:22.853045    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:22.853045    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:09:22.853045    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:09:22.927062    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:09:22.927062    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:09:22.998132    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:09:22.998132    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:09:23.086131    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:09:23.087134    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:09:23.141997    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:23.141997    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:23.230871    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:09:23.230871    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:09:23.299518    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:09:23.299518    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:25.886557    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:25.913265    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:09:25.944255    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:09:25.947252    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:09:25.983726    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:09:25.988957    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:09:26.020683    1468 logs.go:282] 0 containers: []
	W1213 10:09:26.020683    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:09:26.025051    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:09:26.066131    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:09:26.069131    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:09:26.100136    1468 logs.go:282] 0 containers: []
	W1213 10:09:26.100136    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:26.103140    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:09:26.145126    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:09:26.148138    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:09:26.182131    1468 logs.go:282] 0 containers: []
	W1213 10:09:26.182131    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:26.186145    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:09:26.224128    1468 logs.go:282] 0 containers: []
	W1213 10:09:26.224128    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:09:26.224128    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:26.224128    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:26.310129    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:26.310129    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:26.353773    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:09:26.353773    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:09:26.407775    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:09:26.408763    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:09:26.459295    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:26.459295    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:26.545605    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:26.545605    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:09:26.545605    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:09:26.595817    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:09:26.595817    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:09:26.638215    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:09:26.638215    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:09:26.673210    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:09:26.673210    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:29.239938    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:29.259928    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:09:29.291783    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:09:29.294773    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:09:29.329468    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:09:29.332478    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:09:29.366769    1468 logs.go:282] 0 containers: []
	W1213 10:09:29.366769    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:09:29.370523    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:09:29.405308    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:09:29.409234    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:09:29.450836    1468 logs.go:282] 0 containers: []
	W1213 10:09:29.450836    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:29.455391    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:09:29.495213    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:09:29.498214    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:09:29.551866    1468 logs.go:282] 0 containers: []
	W1213 10:09:29.551936    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:29.557386    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:09:29.596797    1468 logs.go:282] 0 containers: []
	W1213 10:09:29.596797    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:09:29.596797    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:09:29.596797    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:09:29.642113    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:09:29.642113    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:09:29.678110    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:29.678110    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:29.739503    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:29.739503    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:29.786570    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:29.786570    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:29.870966    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:29.870966    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:09:29.870966    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:09:29.918953    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:09:29.918953    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:09:29.948955    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:09:29.948955    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:30.007179    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:09:30.007179    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:09:32.574908    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:32.596894    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:09:32.630904    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:09:32.633906    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:09:32.669941    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:09:32.675923    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:09:32.708222    1468 logs.go:282] 0 containers: []
	W1213 10:09:32.708222    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:09:32.711221    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:09:32.745502    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:09:32.748275    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:09:32.786011    1468 logs.go:282] 0 containers: []
	W1213 10:09:32.786011    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:32.790999    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:09:32.821003    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:09:32.823993    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:09:32.854007    1468 logs.go:282] 0 containers: []
	W1213 10:09:32.854007    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:32.859015    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:09:32.896012    1468 logs.go:282] 0 containers: []
	W1213 10:09:32.896012    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:09:32.896012    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:32.896012    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:32.995007    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:32.995007    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:09:32.995007    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:09:33.037996    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:09:33.037996    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:09:33.089002    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:09:33.089002    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:09:33.134002    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:09:33.134002    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:33.197538    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:33.197538    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:33.271253    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:33.271253    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:33.312852    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:09:33.312852    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:09:33.371143    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:09:33.371143    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:09:35.907426    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:35.927430    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:09:35.967514    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:09:35.972520    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:09:36.012307    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:09:36.016944    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:09:36.049845    1468 logs.go:282] 0 containers: []
	W1213 10:09:36.049845    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:09:36.053854    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:09:36.090844    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:09:36.093866    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:09:36.120844    1468 logs.go:282] 0 containers: []
	W1213 10:09:36.120844    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:36.124847    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:09:36.157855    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:09:36.162855    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:09:36.195853    1468 logs.go:282] 0 containers: []
	W1213 10:09:36.195853    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:36.198852    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:09:36.230849    1468 logs.go:282] 0 containers: []
	W1213 10:09:36.230849    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:09:36.230849    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:09:36.230849    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:09:36.280992    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:09:36.280992    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:09:36.318994    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:36.318994    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:36.394008    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:36.394008    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:36.438916    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:36.438916    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:36.543952    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:36.543952    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:09:36.543952    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:36.612923    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:09:36.612923    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:09:36.666939    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:09:36.666939    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:09:36.724927    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:09:36.725935    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:09:39.284382    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:39.312395    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:09:39.347382    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:09:39.350374    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:09:39.392055    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:09:39.396494    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:09:39.426172    1468 logs.go:282] 0 containers: []
	W1213 10:09:39.426172    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:09:39.430171    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:09:39.458214    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:09:39.462063    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:09:39.495797    1468 logs.go:282] 0 containers: []
	W1213 10:09:39.495797    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:39.500768    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:09:39.531886    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:09:39.535669    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:09:39.570014    1468 logs.go:282] 0 containers: []
	W1213 10:09:39.571011    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:39.574020    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:09:39.609566    1468 logs.go:282] 0 containers: []
	W1213 10:09:39.609566    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:09:39.609566    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:09:39.609566    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:09:39.653079    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:39.653079    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:39.696238    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:39.696238    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:39.779910    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:39.779910    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:09:39.779910    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:09:39.826583    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:09:39.826617    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:09:39.856880    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:09:39.856880    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:39.914073    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:39.914073    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:39.981796    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:09:39.981796    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:09:40.035113    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:09:40.035113    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:09:42.591331    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:42.614402    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:09:42.646817    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:09:42.649805    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:09:42.684598    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:09:42.688586    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:09:42.721174    1468 logs.go:282] 0 containers: []
	W1213 10:09:42.721174    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:09:42.724188    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:09:42.756979    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:09:42.761339    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:09:42.794495    1468 logs.go:282] 0 containers: []
	W1213 10:09:42.794495    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:42.797498    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:09:42.829659    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:09:42.832648    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:09:42.872282    1468 logs.go:282] 0 containers: []
	W1213 10:09:42.872282    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:42.876291    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:09:42.907454    1468 logs.go:282] 0 containers: []
	W1213 10:09:42.907454    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:09:42.907454    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:42.907454    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:42.977465    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:42.977465    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:43.015502    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:43.015502    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:43.112329    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:43.112862    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:09:43.112900    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:09:43.168739    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:09:43.168739    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:09:43.208639    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:09:43.208639    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:09:43.259295    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:09:43.259295    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:09:43.305894    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:09:43.305894    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:09:43.335885    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:09:43.335885    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:45.899692    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:45.921701    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:09:45.957687    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:09:45.962701    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:09:46.000331    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:09:46.004324    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:09:46.035567    1468 logs.go:282] 0 containers: []
	W1213 10:09:46.035567    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:09:46.039025    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:09:46.073454    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:09:46.077867    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:09:46.111822    1468 logs.go:282] 0 containers: []
	W1213 10:09:46.111822    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:46.116145    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:09:46.153662    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:09:46.158830    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:09:46.196556    1468 logs.go:282] 0 containers: []
	W1213 10:09:46.196556    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:46.200565    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:09:46.229557    1468 logs.go:282] 0 containers: []
	W1213 10:09:46.229557    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:09:46.229557    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:46.229557    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:46.311568    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:46.311568    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:46.348439    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:09:46.348439    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:09:46.395668    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:09:46.395668    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:09:46.429853    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:46.429853    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:46.510057    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:46.510057    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:09:46.510057    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:09:46.555364    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:09:46.555364    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:09:46.596623    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:09:46.596623    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:09:46.638264    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:09:46.638264    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
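
Between cycles the readiness check is `pgrep -xnf kube-apiserver.*minikube.*`: -f matches against the full command line, -x requires the whole line to match the pattern, and -n keeps only the newest match. While no such process exists the command exits non-zero, so the gather/retry loop continues on its roughly three-second cadence. Standalone, as a sketch:

	# prints a PID and exits 0 once an apiserver process matching the
	# pattern is running; exits 1 with no output until then
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
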
	I1213 10:09:49.197175    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:49.220163    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:09:49.249176    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:09:49.252167    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:09:49.290174    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:09:49.294168    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:09:49.325453    1468 logs.go:282] 0 containers: []
	W1213 10:09:49.325453    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:09:49.329461    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:09:49.358096    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:09:49.362085    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:09:49.392947    1468 logs.go:282] 0 containers: []
	W1213 10:09:49.392947    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:49.396736    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:09:49.428519    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:09:49.432489    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:09:49.472881    1468 logs.go:282] 0 containers: []
	W1213 10:09:49.472881    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:49.475885    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:09:49.509883    1468 logs.go:282] 0 containers: []
	W1213 10:09:49.509883    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:09:49.509883    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:49.509883    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:49.761475    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:49.761475    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:09:49.761475    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:09:49.813708    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:09:49.813708    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:09:49.856707    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:49.856707    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:49.927695    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:09:49.927695    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:09:49.982683    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:09:49.982683    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:09:50.035698    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:09:50.035698    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:09:50.071702    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:09:50.072697    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:50.130703    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:50.130703    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:52.679025    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:52.702021    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:09:52.739014    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:09:52.743024    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:09:52.772019    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:09:52.776015    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:09:52.807017    1468 logs.go:282] 0 containers: []
	W1213 10:09:52.807017    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:09:52.810020    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:09:52.842020    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:09:52.845023    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:09:52.876619    1468 logs.go:282] 0 containers: []
	W1213 10:09:52.876619    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:52.880610    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:09:52.917451    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:09:52.920938    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:09:52.958898    1468 logs.go:282] 0 containers: []
	W1213 10:09:52.958898    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:52.962904    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:09:52.995913    1468 logs.go:282] 0 containers: []
	W1213 10:09:52.995913    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:09:52.995913    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:09:52.995913    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:53.052906    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:53.052906    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:53.089916    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:53.089916    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:53.177869    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:53.177869    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:09:53.177869    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:09:53.217875    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:09:53.217875    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:09:53.260798    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:53.260798    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:53.330330    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:09:53.330330    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:09:53.379544    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:09:53.379544    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:09:53.422562    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:09:53.422562    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
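
Host-side logs are collected with bounded tails so the report cannot grow without limit: the newest 400 journal entries for kubelet and for docker/cri-docker, plus kernel messages at warning severity or above. The dmesg flags decode as human-readable output (-H) with the pager (-P) and color (-L=never) disabled, filtered by --level; a sketch of the same collection run by hand:

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u docker -u cri-docker -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
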
	I1213 10:09:55.962844    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:55.986845    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:09:56.020426    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:09:56.024509    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:09:56.062424    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:09:56.065818    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:09:56.097545    1468 logs.go:282] 0 containers: []
	W1213 10:09:56.097612    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:09:56.101807    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:09:56.138955    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:09:56.143984    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:09:56.196792    1468 logs.go:282] 0 containers: []
	W1213 10:09:56.196875    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:56.202147    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:09:56.254841    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:09:56.262863    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:09:56.320847    1468 logs.go:282] 0 containers: []
	W1213 10:09:56.320847    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:56.324855    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:09:56.366847    1468 logs.go:282] 0 containers: []
	W1213 10:09:56.366847    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:09:56.366847    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:56.366847    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:56.442849    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:09:56.442849    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:09:56.491861    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:09:56.491861    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:09:56.564842    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:56.564842    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:56.603853    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:56.603853    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:56.699568    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:56.699568    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:09:56.699568    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:09:56.762354    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:09:56.762354    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:09:56.816357    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:09:56.816357    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:09:56.855365    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:09:56.855365    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:59.430550    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:59.451739    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:09:59.487838    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:09:59.491420    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:09:59.530512    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:09:59.533673    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:09:59.564775    1468 logs.go:282] 0 containers: []
	W1213 10:09:59.564775    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:09:59.567768    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:09:59.602086    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:09:59.606083    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:09:59.635077    1468 logs.go:282] 0 containers: []
	W1213 10:09:59.636082    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:59.639078    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:09:59.674100    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:09:59.677084    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:09:59.708874    1468 logs.go:282] 0 containers: []
	W1213 10:09:59.708916    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:59.714733    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:09:59.770178    1468 logs.go:282] 0 containers: []
	W1213 10:09:59.770178    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:09:59.770178    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:09:59.770178    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:09:59.809167    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:09:59.809167    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:09:59.849038    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:09:59.849078    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:59.913817    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:59.913817    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:59.973819    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:59.973819    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:00.076150    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:00.076150    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:10:00.076150    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:10:00.121157    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:10:00.121157    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:10:00.164157    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:10:00.164157    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:10:00.193153    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:00.193153    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:02.736260    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:02.757399    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:10:02.789389    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:10:02.793032    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:10:02.825277    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:10:02.828759    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:10:02.856748    1468 logs.go:282] 0 containers: []
	W1213 10:10:02.856748    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:10:02.861523    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:10:02.893252    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:10:02.897278    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:10:02.926514    1468 logs.go:282] 0 containers: []
	W1213 10:10:02.926514    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:02.930759    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:10:02.965673    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:10:02.968816    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:10:02.997892    1468 logs.go:282] 0 containers: []
	W1213 10:10:02.997892    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:03.002531    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:10:03.033243    1468 logs.go:282] 0 containers: []
	W1213 10:10:03.033243    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:10:03.033338    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:10:03.033338    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:10:03.064510    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:03.065041    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:03.141479    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:03.141479    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:10:03.141479    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:10:03.187244    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:10:03.187285    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:10:03.235466    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:10:03.235466    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:03.304541    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:03.304541    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:03.367953    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:03.367953    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:03.408263    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:10:03.408263    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:10:03.460545    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:10:03.460545    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:10:06.007486    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:06.037785    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:10:06.074645    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:10:06.078704    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:10:06.110995    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:10:06.115685    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:10:06.144080    1468 logs.go:282] 0 containers: []
	W1213 10:10:06.144080    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:10:06.147874    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:10:06.178727    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:10:06.181215    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:10:06.210404    1468 logs.go:282] 0 containers: []
	W1213 10:10:06.210404    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:06.214715    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:10:06.246392    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:10:06.250028    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:10:06.281066    1468 logs.go:282] 0 containers: []
	W1213 10:10:06.281066    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:06.285067    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:10:06.318023    1468 logs.go:282] 0 containers: []
	W1213 10:10:06.318023    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:10:06.318023    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:06.318023    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:06.356747    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:06.356747    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:06.440556    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:06.440556    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:10:06.440556    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:10:06.485054    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:06.485054    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:06.553694    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:10:06.553694    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:10:06.611545    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:10:06.611545    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:10:06.658709    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:10:06.658739    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:10:06.699860    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:10:06.699920    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:10:06.729856    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:10:06.729856    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
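
The "container status" step is a two-stage fallback. Inside the backticks, `which crictl || echo crictl` expands to crictl's full path when it is installed, or to the bare word crictl otherwise; if that command is missing or fails, the trailing `|| sudo docker ps -a` falls back to the docker CLI. The same line without backtick substitution, as a sketch:

	# prefer crictl when present; otherwise, or on any crictl failure,
	# fall back to listing containers through docker
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
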
	I1213 10:10:09.285292    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:09.309705    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:10:09.344289    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:10:09.347518    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:10:09.378399    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:10:09.382159    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:10:09.413571    1468 logs.go:282] 0 containers: []
	W1213 10:10:09.413663    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:10:09.417114    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:10:09.452735    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:10:09.456574    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:10:09.494260    1468 logs.go:282] 0 containers: []
	W1213 10:10:09.494260    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:09.498546    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:10:09.529018    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:10:09.532840    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:10:09.564031    1468 logs.go:282] 0 containers: []
	W1213 10:10:09.564031    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:09.568336    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:10:09.598676    1468 logs.go:282] 0 containers: []
	W1213 10:10:09.598700    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:10:09.598700    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:09.598700    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:09.665137    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:09.665137    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:09.704496    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:10:09.704496    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:10:09.756601    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:10:09.756601    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:10:09.806616    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:10:09.806697    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:10:09.853808    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:09.853808    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:09.935082    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:09.935082    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:10:09.935082    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:10:09.982876    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:10:09.982876    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:10:10.136033    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:10:10.136033    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:12.763679    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:12.785674    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:10:12.823672    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:10:12.827695    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:10:12.861669    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:10:12.864672    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:10:12.903673    1468 logs.go:282] 0 containers: []
	W1213 10:10:12.903673    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:10:12.907675    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:10:12.944671    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:10:12.950692    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:10:12.989237    1468 logs.go:282] 0 containers: []
	W1213 10:10:12.989237    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:12.993679    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:10:13.036845    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:10:13.043367    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:10:13.078002    1468 logs.go:282] 0 containers: []
	W1213 10:10:13.078002    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:13.084191    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:10:13.116222    1468 logs.go:282] 0 containers: []
	W1213 10:10:13.116222    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:10:13.116222    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:10:13.116222    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:10:13.161687    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:13.161687    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:13.275529    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:13.276520    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:10:13.276520    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:10:13.325527    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:10:13.325527    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:10:13.376278    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:10:13.376278    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:10:13.407721    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:10:13.407721    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:13.460447    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:13.460447    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:13.546046    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:13.546046    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:13.588071    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:10:13.588071    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
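
Per-container logs are captured the same bounded way, `docker logs --tail 400 <id>` for each discovered ID. To pull all four surviving control-plane logs in one pass, a sketch reusing the IDs from this run:

	# apiserver, etcd, scheduler, controller-manager (IDs from this run)
	for id in 4a078925b3b8 234db481cf83 ee2fab724cef 871e19efc5fa; do
	    echo "== $id =="
	    docker logs --tail 400 "$id"
	done
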
	I1213 10:10:16.142742    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:16.167263    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:10:16.202685    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:10:16.208049    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:10:16.264970    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:10:16.269556    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:10:16.308367    1468 logs.go:282] 0 containers: []
	W1213 10:10:16.308420    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:10:16.312450    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:10:16.346300    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:10:16.352361    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:10:16.390891    1468 logs.go:282] 0 containers: []
	W1213 10:10:16.390941    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:16.394755    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:10:16.441430    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:10:16.445621    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:10:16.479674    1468 logs.go:282] 0 containers: []
	W1213 10:10:16.479674    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:16.485951    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:10:16.527427    1468 logs.go:282] 0 containers: []
	W1213 10:10:16.527469    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:10:16.527512    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:16.527512    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:16.595035    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:16.595082    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:16.634601    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:16.634644    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:16.733571    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:16.733571    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:10:16.733571    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:10:16.781938    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:10:16.781938    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:10:16.830957    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:10:16.830957    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:10:16.882391    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:10:16.882391    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:10:16.919714    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:10:16.919714    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:10:16.973334    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:10:16.973334    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:19.560619    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:19.581659    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:10:19.616290    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:10:19.620258    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:10:19.651733    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:10:19.655621    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:10:19.688250    1468 logs.go:282] 0 containers: []
	W1213 10:10:19.688294    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:10:19.692861    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:10:19.723160    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:10:19.726948    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:10:19.762222    1468 logs.go:282] 0 containers: []
	W1213 10:10:19.762273    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:19.765497    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:10:19.805801    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:10:19.808796    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:10:19.837610    1468 logs.go:282] 0 containers: []
	W1213 10:10:19.837610    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:19.841334    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:10:19.879626    1468 logs.go:282] 0 containers: []
	W1213 10:10:19.879626    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:10:19.879626    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:19.879626    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:19.946207    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:19.946207    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:19.990458    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:10:19.990458    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:10:20.037076    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:10:20.037076    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:10:20.089611    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:10:20.089647    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:20.146741    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:20.146741    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:20.238480    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:20.239131    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:10:20.239180    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:10:20.292879    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:10:20.292879    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:10:20.343690    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:10:20.343690    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:10:22.888022    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:22.910944    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:10:22.948943    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:10:22.952479    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:10:22.999502    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:10:23.003479    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:10:23.031478    1468 logs.go:282] 0 containers: []
	W1213 10:10:23.031478    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:10:23.034478    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:10:23.071487    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:10:23.075483    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:10:23.107855    1468 logs.go:282] 0 containers: []
	W1213 10:10:23.107855    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:23.111466    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:10:23.141336    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:10:23.145345    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:10:23.183174    1468 logs.go:282] 0 containers: []
	W1213 10:10:23.183174    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:23.186174    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:10:23.217173    1468 logs.go:282] 0 containers: []
	W1213 10:10:23.217173    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:10:23.217173    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:23.217173    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:23.252167    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:23.252167    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:23.339178    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:23.339178    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:10:23.339178    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:10:23.393666    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:10:23.393666    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:10:23.431677    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:10:23.431677    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:23.488257    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:10:23.488257    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:10:23.527241    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:10:23.527241    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:10:23.573238    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:10:23.573238    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:10:23.603255    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:23.603255    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:26.169848    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:26.194224    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:10:26.223742    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:10:26.226733    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:10:26.259711    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:10:26.264240    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:10:26.293501    1468 logs.go:282] 0 containers: []
	W1213 10:10:26.293501    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:10:26.296890    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:10:26.329948    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:10:26.333639    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:10:26.367175    1468 logs.go:282] 0 containers: []
	W1213 10:10:26.367237    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:26.373099    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:10:26.409992    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:10:26.412989    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:10:26.443865    1468 logs.go:282] 0 containers: []
	W1213 10:10:26.443865    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:26.447997    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:10:26.487708    1468 logs.go:282] 0 containers: []
	W1213 10:10:26.487708    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:10:26.487708    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:10:26.487708    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:10:26.519318    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:10:26.519318    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:26.567915    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:26.567915    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:26.659779    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:26.659779    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:10:26.659779    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:10:26.708557    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:10:26.708557    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:10:26.774279    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:10:26.774279    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:10:26.826439    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:10:26.826439    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:10:26.869999    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:26.870054    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:26.942315    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:26.942315    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:29.489118    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:29.508113    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:10:29.538788    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:10:29.542450    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:10:29.576505    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:10:29.581247    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:10:29.610966    1468 logs.go:282] 0 containers: []
	W1213 10:10:29.610966    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:10:29.615199    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:10:29.647706    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:10:29.651869    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:10:29.688949    1468 logs.go:282] 0 containers: []
	W1213 10:10:29.688949    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:29.693377    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:10:29.727103    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:10:29.732180    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:10:29.772161    1468 logs.go:282] 0 containers: []
	W1213 10:10:29.772214    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:29.776527    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:10:29.809065    1468 logs.go:282] 0 containers: []
	W1213 10:10:29.809116    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:10:29.809174    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:10:29.809174    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:10:29.848890    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:29.849384    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:29.892966    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:10:29.892966    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:10:29.946404    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:10:29.946404    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:10:29.996411    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:10:29.996411    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:30.057714    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:30.057714    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:30.205998    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:30.205998    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:30.325813    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:30.325813    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:10:30.325813    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:10:30.390557    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:10:30.390557    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:10:32.946362    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:32.981207    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:10:33.016749    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:10:33.019749    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:10:33.055314    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:10:33.060322    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:10:33.099512    1468 logs.go:282] 0 containers: []
	W1213 10:10:33.099512    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:10:33.102508    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:10:33.133552    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:10:33.137248    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:10:33.179105    1468 logs.go:282] 0 containers: []
	W1213 10:10:33.179224    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:33.185619    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:10:33.218005    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:10:33.221524    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:10:33.254928    1468 logs.go:282] 0 containers: []
	W1213 10:10:33.255511    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:33.262101    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:10:33.297846    1468 logs.go:282] 0 containers: []
	W1213 10:10:33.297846    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:10:33.298390    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:10:33.298390    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:33.354692    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:33.354692    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:33.447488    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:33.447542    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:10:33.447639    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:10:33.502588    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:10:33.502588    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:10:33.543258    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:10:33.543323    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:10:33.606931    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:10:33.606931    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:10:33.641982    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:33.642051    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:33.719328    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:33.719328    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:33.760102    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:10:33.760102    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:10:36.336807    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:36.366111    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:10:36.409323    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:10:36.415237    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:10:36.457538    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:10:36.462217    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:10:36.494055    1468 logs.go:282] 0 containers: []
	W1213 10:10:36.494055    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:10:36.498035    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:10:36.528260    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:10:36.531263    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:10:36.563683    1468 logs.go:282] 0 containers: []
	W1213 10:10:36.563683    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:36.567880    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:10:36.597560    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:10:36.602488    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:10:36.631524    1468 logs.go:282] 0 containers: []
	W1213 10:10:36.631524    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:36.634593    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:10:36.670878    1468 logs.go:282] 0 containers: []
	W1213 10:10:36.670905    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:10:36.670905    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:36.670905    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:36.737666    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:36.737666    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:36.781137    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:36.781137    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:36.874309    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:36.875654    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:10:36.875654    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:10:36.922151    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:10:36.922151    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:10:36.966231    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:10:36.966231    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:10:36.999260    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:10:36.999801    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:10:37.041282    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:10:37.041282    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:10:37.094722    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:10:37.094722    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:39.651156    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:39.678414    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:10:39.713640    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:10:39.716635    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:10:39.749249    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:10:39.753259    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:10:39.783512    1468 logs.go:282] 0 containers: []
	W1213 10:10:39.783512    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:10:39.786845    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:10:39.822149    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:10:39.825796    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:10:39.858536    1468 logs.go:282] 0 containers: []
	W1213 10:10:39.858536    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:39.863125    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:10:39.894647    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:10:39.898486    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:10:39.930201    1468 logs.go:282] 0 containers: []
	W1213 10:10:39.930201    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:39.934698    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:10:39.967145    1468 logs.go:282] 0 containers: []
	W1213 10:10:39.967145    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:10:39.967145    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:10:39.967145    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:40.025939    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:40.026048    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:40.115054    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:40.115054    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:10:40.115054    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:10:40.166737    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:10:40.166737    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:10:40.224888    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:10:40.224888    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:10:40.274625    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:40.274625    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:40.339611    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:40.339611    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:40.383615    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:10:40.383615    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:10:40.425419    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:10:40.425419    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:10:42.964912    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:42.990545    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:10:43.026491    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:10:43.030537    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:10:43.058417    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:10:43.064704    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:10:43.096118    1468 logs.go:282] 0 containers: []
	W1213 10:10:43.096118    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:10:43.100213    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:10:43.131035    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:10:43.136148    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:10:43.170285    1468 logs.go:282] 0 containers: []
	W1213 10:10:43.170285    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:43.173708    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:10:43.208701    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:10:43.212701    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:10:43.259585    1468 logs.go:282] 0 containers: []
	W1213 10:10:43.259585    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:43.263623    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:10:43.292680    1468 logs.go:282] 0 containers: []
	W1213 10:10:43.292680    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:10:43.292680    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:10:43.292680    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:10:43.331425    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:10:43.331425    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:43.384882    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:43.384882    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:43.456561    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:43.456561    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:43.546117    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:43.546117    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:10:43.546117    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:10:43.589611    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:10:43.589611    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:10:43.638814    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:10:43.638814    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:10:43.675678    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:43.675678    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:43.714293    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:10:43.714293    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:10:46.275989    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:46.366843    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:10:46.405186    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:10:46.411275    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:10:46.446496    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:10:46.453455    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:10:46.486461    1468 logs.go:282] 0 containers: []
	W1213 10:10:46.486461    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:10:46.489446    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:10:46.529410    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:10:46.533912    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:10:46.579067    1468 logs.go:282] 0 containers: []
	W1213 10:10:46.579067    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:46.583251    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:10:46.619055    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:10:46.622873    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:10:46.666710    1468 logs.go:282] 0 containers: []
	W1213 10:10:46.666710    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:46.671206    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:10:46.705673    1468 logs.go:282] 0 containers: []
	W1213 10:10:46.705673    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:10:46.705673    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:10:46.705673    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:46.769797    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:46.769797    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:46.848212    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:46.848212    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:46.893593    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:10:46.894598    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:10:46.951765    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:10:46.951765    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:10:47.007761    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:10:47.007761    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:10:47.039758    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:47.040757    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:47.144788    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:47.144788    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:10:47.144788    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:10:47.208485    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:10:47.209025    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:10:49.761520    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:49.786529    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:10:49.824529    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:10:49.828523    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:10:49.864527    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:10:49.869516    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:10:49.906524    1468 logs.go:282] 0 containers: []
	W1213 10:10:49.906524    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:10:49.911515    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:10:49.957523    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:10:49.962546    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:10:50.003534    1468 logs.go:282] 0 containers: []
	W1213 10:10:50.003534    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:50.007522    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:10:50.083253    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:10:50.088908    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:10:50.131673    1468 logs.go:282] 0 containers: []
	W1213 10:10:50.131673    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:50.135651    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:10:50.175664    1468 logs.go:282] 0 containers: []
	W1213 10:10:50.175664    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:10:50.175664    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:50.175664    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:50.285675    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:50.285675    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:10:50.285675    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:50.349659    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:10:50.349659    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:10:50.408661    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:10:50.408661    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:10:50.458667    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:10:50.458667    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:10:50.513674    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:10:50.513674    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:10:50.562658    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:10:50.562658    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:10:50.595669    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:50.596670    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:50.680677    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:50.680677    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:53.238853    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:53.262203    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:10:53.295436    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:10:53.300689    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:10:53.335260    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:10:53.338257    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:10:53.369282    1468 logs.go:282] 0 containers: []
	W1213 10:10:53.369282    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:10:53.372266    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:10:53.403422    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:10:53.406426    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:10:53.435436    1468 logs.go:282] 0 containers: []
	W1213 10:10:53.435436    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:53.438420    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:10:53.471428    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:10:53.474422    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:10:53.504558    1468 logs.go:282] 0 containers: []
	W1213 10:10:53.504613    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:53.510794    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:10:53.545126    1468 logs.go:282] 0 containers: []
	W1213 10:10:53.545126    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:10:53.545126    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:10:53.545126    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:10:53.591658    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:10:53.591658    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:10:53.632260    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:10:53.632260    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:53.688708    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:53.688708    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:53.790647    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:53.790647    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:10:53.790647    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:10:53.834645    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:10:53.834645    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:10:53.864661    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:53.864661    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:53.936520    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:53.936520    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:53.977167    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:10:53.978173    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:10:56.527084    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:56.548435    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:10:56.583967    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:10:56.588002    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:10:56.620977    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:10:56.624489    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:10:56.651711    1468 logs.go:282] 0 containers: []
	W1213 10:10:56.651711    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:10:56.654704    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:10:56.686311    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:10:56.691296    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:10:56.727269    1468 logs.go:282] 0 containers: []
	W1213 10:10:56.727269    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:56.730271    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:10:56.765260    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:10:56.769263    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:10:56.797259    1468 logs.go:282] 0 containers: []
	W1213 10:10:56.797259    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:56.801260    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:10:56.833265    1468 logs.go:282] 0 containers: []
	W1213 10:10:56.833265    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:10:56.833265    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:56.833265    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:56.871726    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:56.871726    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:56.950102    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:56.950102    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:10:56.950102    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:10:56.998419    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:10:56.998419    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:10:57.036420    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:10:57.036420    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:10:57.073850    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:10:57.073850    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:57.121339    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:57.121411    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:57.191978    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:10:57.191978    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:10:57.242617    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:10:57.242617    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:10:59.789380    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:59.812910    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:10:59.846686    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:10:59.850114    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:10:59.882007    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:10:59.885566    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:10:59.914599    1468 logs.go:282] 0 containers: []
	W1213 10:10:59.914599    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:10:59.919323    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:10:59.947511    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:10:59.950799    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:10:59.982951    1468 logs.go:282] 0 containers: []
	W1213 10:10:59.983044    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:59.986502    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:11:00.019298    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:11:00.022671    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:11:00.053811    1468 logs.go:282] 0 containers: []
	W1213 10:11:00.053874    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:00.058392    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:11:00.089655    1468 logs.go:282] 0 containers: []
	W1213 10:11:00.089721    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:11:00.089759    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:00.089759    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:00.156924    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:00.156924    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:00.198232    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:00.198232    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:00.364192    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:00.364192    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:11:00.364192    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:11:00.424520    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:11:00.424520    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:11:00.483475    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:11:00.483475    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:00.548462    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:11:00.548462    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:11:00.599373    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:11:00.599373    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:11:00.643547    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:11:00.643547    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:11:03.179275    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:03.206027    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:11:03.240185    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:11:03.244245    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:11:03.274510    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:11:03.278012    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:11:03.310405    1468 logs.go:282] 0 containers: []
	W1213 10:11:03.310405    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:11:03.313864    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:11:03.344037    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:11:03.347980    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:11:03.386036    1468 logs.go:282] 0 containers: []
	W1213 10:11:03.386103    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:03.389465    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:11:03.423998    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:11:03.427002    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:11:03.456183    1468 logs.go:282] 0 containers: []
	W1213 10:11:03.456183    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:03.460573    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:11:03.498795    1468 logs.go:282] 0 containers: []
	W1213 10:11:03.498859    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:11:03.498915    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:11:03.498915    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:03.563810    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:11:03.563887    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:11:03.616549    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:11:03.616549    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:11:03.661857    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:11:03.661857    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:11:03.700118    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:11:03.700118    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:11:03.732338    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:03.732338    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:03.802079    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:03.802079    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:03.842247    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:03.842247    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:03.933127    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:03.933127    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:11:03.933127    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:11:06.483370    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:06.509788    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:11:06.541863    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:11:06.546069    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:11:06.579356    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:11:06.585368    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:11:06.615410    1468 logs.go:282] 0 containers: []
	W1213 10:11:06.615410    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:11:06.619271    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:11:06.654036    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:11:06.657328    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:11:06.686701    1468 logs.go:282] 0 containers: []
	W1213 10:11:06.686701    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:06.690476    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:11:06.721811    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:11:06.726400    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:11:06.759118    1468 logs.go:282] 0 containers: []
	W1213 10:11:06.759118    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:06.763321    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:11:06.795804    1468 logs.go:282] 0 containers: []
	W1213 10:11:06.795804    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:11:06.795804    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:06.795804    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:06.861796    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:06.861796    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:06.901536    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:06.901536    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:06.982466    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:06.982466    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:11:06.982466    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:11:07.029208    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:11:07.029729    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:11:07.062860    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:11:07.062860    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:07.124544    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:11:07.124667    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:11:07.171815    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:11:07.171815    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:11:07.223433    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:11:07.223433    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:11:09.782573    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:09.808390    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:11:09.844585    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:11:09.848228    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:11:09.883750    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:11:09.887268    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:11:09.917608    1468 logs.go:282] 0 containers: []
	W1213 10:11:09.917608    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:11:09.921477    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:11:09.954097    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:11:09.957320    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:11:09.988052    1468 logs.go:282] 0 containers: []
	W1213 10:11:09.988052    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:09.992925    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:11:10.029050    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:11:10.032614    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:11:10.060508    1468 logs.go:282] 0 containers: []
	W1213 10:11:10.060559    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:10.064072    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:11:10.096244    1468 logs.go:282] 0 containers: []
	W1213 10:11:10.096244    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:11:10.096244    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:10.096244    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:10.138793    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:11:10.138793    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:11:10.194615    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:11:10.194681    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:10.271247    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:10.272251    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:10.337554    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:10.337554    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:10.432814    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:10.432814    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:11:10.432814    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:11:10.478568    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:11:10.478568    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:11:10.519122    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:11:10.519122    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:11:10.564670    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:11:10.564670    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
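The pass above is minikube's apiserver wait loop: roughly every three seconds it probes for a running kube-apiserver process, and when the probe fails it re-gathers each control-plane component's logs before retrying. A minimal sketch of running the same probe by hand, assuming a docker-driver profile reachable via `minikube ssh` and curl present in the node image; the profile name below is hypothetical, since this excerpt does not show which profile produced the log:

    # Hypothetical profile name; substitute the profile under test.
    PROFILE=functional-000000
    # The process probe the log repeats (pattern taken verbatim from the log):
    minikube -p "$PROFILE" ssh "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"
    # While no apiserver is listening, the kubeconfig endpoint refuses
    # connections, which is why every "describe nodes" pass below fails:
    minikube -p "$PROFILE" ssh "curl -sk https://localhost:8443/healthz" \
      || echo "apiserver not reachable on localhost:8443"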
	I1213 10:11:13.101690    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:13.126846    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:11:13.166569    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:11:13.171886    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:11:13.213402    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:11:13.217386    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:11:13.260392    1468 logs.go:282] 0 containers: []
	W1213 10:11:13.260392    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:11:13.264395    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:11:13.298814    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:11:13.302760    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:11:13.335140    1468 logs.go:282] 0 containers: []
	W1213 10:11:13.335140    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:13.338614    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:11:13.371165    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:11:13.375167    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:11:13.408842    1468 logs.go:282] 0 containers: []
	W1213 10:11:13.408842    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:13.412847    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:11:13.446511    1468 logs.go:282] 0 containers: []
	W1213 10:11:13.446578    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:11:13.446578    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:11:13.446613    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:11:13.495405    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:11:13.495405    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:11:13.525406    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:11:13.525406    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:13.574413    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:13.574413    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:13.642403    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:13.642403    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:13.683409    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:13.683409    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:13.768259    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:13.768294    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:11:13.768333    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:11:13.822442    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:11:13.822442    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:11:13.868184    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:11:13.868184    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:11:16.414227    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:16.438022    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:11:16.473317    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:11:16.477673    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:11:16.508563    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:11:16.511563    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:11:16.544067    1468 logs.go:282] 0 containers: []
	W1213 10:11:16.544067    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:11:16.549082    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:11:16.580425    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:11:16.584015    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:11:16.616897    1468 logs.go:282] 0 containers: []
	W1213 10:11:16.616897    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:16.621448    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:11:16.657231    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:11:16.660289    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:11:16.697290    1468 logs.go:282] 0 containers: []
	W1213 10:11:16.697290    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:16.703782    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:11:16.739119    1468 logs.go:282] 0 containers: []
	W1213 10:11:16.739119    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:11:16.739119    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:11:16.739119    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:11:16.771521    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:16.771521    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:16.836137    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:11:16.836137    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:11:16.901621    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:11:16.902617    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:11:16.953338    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:11:16.953338    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:11:17.004331    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:11:17.004331    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:17.058334    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:17.058334    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:17.100326    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:17.100326    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:17.200973    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:17.200973    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:11:17.200973    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:11:19.741202    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:19.855601    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:11:19.891373    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:11:19.895070    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:11:19.931891    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:11:19.935846    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:11:19.972612    1468 logs.go:282] 0 containers: []
	W1213 10:11:19.972655    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:11:19.975929    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:11:20.014803    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:11:20.019017    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:11:20.053219    1468 logs.go:282] 0 containers: []
	W1213 10:11:20.053264    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:20.057938    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:11:20.091832    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:11:20.095853    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:11:20.134265    1468 logs.go:282] 0 containers: []
	W1213 10:11:20.134487    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:20.137694    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:11:20.179431    1468 logs.go:282] 0 containers: []
	W1213 10:11:20.180435    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:11:20.180435    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:20.180435    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:20.297439    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:20.297439    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:11:20.297439    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:11:20.360432    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:11:20.360432    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:11:20.418426    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:11:20.418426    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:11:20.464428    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:11:20.464428    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:11:20.508430    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:20.508430    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:20.574960    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:11:20.574960    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:11:20.622978    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:11:20.622978    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:20.708368    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:20.708368    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:23.253333    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:23.275662    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:11:23.311669    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:11:23.315655    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:11:23.344663    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:11:23.348655    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:11:23.380676    1468 logs.go:282] 0 containers: []
	W1213 10:11:23.380676    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:11:23.385288    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:11:23.428509    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:11:23.431852    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:11:23.462158    1468 logs.go:282] 0 containers: []
	W1213 10:11:23.462158    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:23.466160    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:11:23.498166    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:11:23.502171    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:11:23.532169    1468 logs.go:282] 0 containers: []
	W1213 10:11:23.532169    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:23.535159    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:11:23.563166    1468 logs.go:282] 0 containers: []
	W1213 10:11:23.563166    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:11:23.563166    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:23.563166    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:23.655171    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:11:23.655171    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:11:23.703161    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:11:23.703161    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:11:23.749089    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:11:23.749089    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:23.803370    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:23.803370    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:23.855304    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:23.855304    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:23.983216    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:23.983216    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:11:23.983216    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:11:24.065203    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:11:24.065203    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:11:24.107197    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:11:24.107197    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:11:26.642742    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:26.688806    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:11:26.720798    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:11:26.724077    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:11:26.784249    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:11:26.787422    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:11:26.818538    1468 logs.go:282] 0 containers: []
	W1213 10:11:26.818538    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:11:26.822551    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:11:26.853538    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:11:26.856532    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:11:26.887337    1468 logs.go:282] 0 containers: []
	W1213 10:11:26.887337    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:26.890818    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:11:26.927017    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:11:26.931283    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:11:26.965879    1468 logs.go:282] 0 containers: []
	W1213 10:11:26.965879    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:26.969372    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:11:27.005124    1468 logs.go:282] 0 containers: []
	W1213 10:11:27.005124    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:11:27.005124    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:27.005219    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:27.070143    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:27.070143    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:27.109000    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:11:27.109000    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:11:27.152569    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:11:27.153260    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:27.207829    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:27.207829    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:27.289803    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:27.289803    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:11:27.289848    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:11:27.342622    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:11:27.342662    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:11:27.386006    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:11:27.386083    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:11:27.430721    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:11:27.430721    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:11:29.967875    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:29.993127    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:11:30.027131    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:11:30.030616    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:11:30.065791    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:11:30.068784    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:11:30.100097    1468 logs.go:282] 0 containers: []
	W1213 10:11:30.100176    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:11:30.105899    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:11:30.140993    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:11:30.145007    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:11:30.179192    1468 logs.go:282] 0 containers: []
	W1213 10:11:30.179192    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:30.183187    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:11:30.221143    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:11:30.225461    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:11:30.256752    1468 logs.go:282] 0 containers: []
	W1213 10:11:30.256752    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:30.261054    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:11:30.292478    1468 logs.go:282] 0 containers: []
	W1213 10:11:30.292478    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:11:30.292478    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:30.292478    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:30.373499    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:30.373499    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:11:30.373499    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:11:30.425476    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:11:30.425476    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:11:30.470470    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:11:30.470470    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:11:30.515474    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:11:30.515474    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:11:30.557478    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:11:30.557478    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:11:30.594479    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:11:30.594479    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:30.657483    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:30.657483    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:30.732479    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:30.732479    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:33.275220    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:33.304385    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:11:33.343096    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:11:33.346878    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:11:33.378816    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:11:33.383471    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:11:33.413467    1468 logs.go:282] 0 containers: []
	W1213 10:11:33.413467    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:11:33.417483    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:11:33.447563    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:11:33.451015    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:11:33.480743    1468 logs.go:282] 0 containers: []
	W1213 10:11:33.480852    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:33.484595    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:11:33.515466    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:11:33.520531    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:11:33.550526    1468 logs.go:282] 0 containers: []
	W1213 10:11:33.550553    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:33.553802    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:11:33.584034    1468 logs.go:282] 0 containers: []
	W1213 10:11:33.584034    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:11:33.584034    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:33.584034    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:33.669119    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:33.669119    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:11:33.669119    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:11:33.715928    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:11:33.715928    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:11:33.773388    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:33.773388    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:33.842341    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:33.842341    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:33.878391    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:11:33.878391    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:11:33.929870    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:11:33.929870    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:11:33.973738    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:11:33.973738    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:11:34.004334    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:11:34.004866    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:36.576367    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:36.606920    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:11:36.644209    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:11:36.647156    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:11:36.680673    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:11:36.685448    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:11:36.721205    1468 logs.go:282] 0 containers: []
	W1213 10:11:36.721205    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:11:36.726596    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:11:36.768673    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:11:36.771898    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:11:36.803893    1468 logs.go:282] 0 containers: []
	W1213 10:11:36.803893    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:36.808008    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:11:36.846207    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:11:36.850089    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:11:36.879475    1468 logs.go:282] 0 containers: []
	W1213 10:11:36.879475    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:36.882576    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:11:36.914396    1468 logs.go:282] 0 containers: []
	W1213 10:11:36.914475    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:11:36.914475    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:36.914475    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:36.981251    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:11:36.981251    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:11:37.030765    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:11:37.030792    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:11:37.071568    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:11:37.071568    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:11:37.103907    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:11:37.103907    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:37.157451    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:37.157500    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:37.211355    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:37.211355    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:37.295893    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:37.297346    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:11:37.297346    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:11:37.346911    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:11:37.346911    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:11:39.916663    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:39.940268    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:11:39.982146    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:11:39.986017    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:11:40.018321    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:11:40.022929    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:11:40.055627    1468 logs.go:282] 0 containers: []
	W1213 10:11:40.055627    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:11:40.062126    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:11:40.094427    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:11:40.098659    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:11:40.129558    1468 logs.go:282] 0 containers: []
	W1213 10:11:40.129558    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:40.134612    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:11:40.165276    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:11:40.169120    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:11:40.204821    1468 logs.go:282] 0 containers: []
	W1213 10:11:40.204821    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:40.208555    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:11:40.241251    1468 logs.go:282] 0 containers: []
	W1213 10:11:40.241251    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:11:40.241251    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:11:40.241251    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:11:40.294672    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:11:40.294672    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:11:40.337174    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:11:40.337174    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:11:40.370276    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:11:40.370850    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:40.427983    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:40.427983    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:40.470692    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:11:40.470692    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:11:40.524193    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:11:40.524193    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:11:40.570805    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:40.570805    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:40.658135    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:40.658135    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:40.752246    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
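Each gather pass resolves a component's container by name filter and then tails its last 400 lines, exactly as in the lines above. A sketch of the same inspection done manually from a shell inside the node (e.g. after `minikube ssh`), using etcd as the example component; the filter and format strings are taken verbatim from the log:

    # Run inside the minikube node.
    id=$(docker ps -a --filter=name=k8s_etcd --format '{{.ID}}')
    if [ -n "$id" ]; then
      docker logs --tail 400 "$id"
    else
      # Mirrors the W-level "No container was found matching" log line.
      echo 'No container was found matching "etcd"'
    fi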
	I1213 10:11:43.259176    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:43.282011    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:11:43.316901    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:11:43.319903    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:11:43.354891    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:11:43.357896    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:11:43.388892    1468 logs.go:282] 0 containers: []
	W1213 10:11:43.388892    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:11:43.392900    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:11:43.424899    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:11:43.427906    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:11:43.461901    1468 logs.go:282] 0 containers: []
	W1213 10:11:43.461901    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:43.465893    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:11:43.500719    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:11:43.503724    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:11:43.538306    1468 logs.go:282] 0 containers: []
	W1213 10:11:43.538306    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:43.541309    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:11:43.572317    1468 logs.go:282] 0 containers: []
	W1213 10:11:43.572317    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:11:43.572317    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:43.572317    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:43.636310    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:43.636310    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:43.687668    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:11:43.687668    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:11:43.776747    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:11:43.776747    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:11:43.825726    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:11:43.825726    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:11:43.872731    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:11:43.872731    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:11:43.922729    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:11:43.922729    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:11:43.952735    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:11:43.952735    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:44.018153    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:44.018690    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:44.131127    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:46.635372    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:46.668220    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:11:46.707938    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:11:46.711328    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:11:46.744791    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:11:46.747786    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:11:46.784047    1468 logs.go:282] 0 containers: []
	W1213 10:11:46.784047    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:11:46.787659    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:11:46.821244    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:11:46.825583    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:11:46.859068    1468 logs.go:282] 0 containers: []
	W1213 10:11:46.859104    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:46.862751    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:11:46.896795    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:11:46.901582    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:11:46.929848    1468 logs.go:282] 0 containers: []
	W1213 10:11:46.929848    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:46.933877    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:11:46.963633    1468 logs.go:282] 0 containers: []
	W1213 10:11:46.963633    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:11:46.963633    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:46.963633    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:47.032095    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:47.032095    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:47.069568    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:47.069568    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:47.149011    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:47.149011    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:11:47.149011    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:11:47.206085    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:11:47.206085    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:11:47.267988    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:11:47.267988    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:11:47.319438    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:11:47.319438    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:11:47.355078    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:11:47.355078    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:11:47.388226    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:11:47.388226    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:49.948845    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:49.972879    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:11:50.010125    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:11:50.014053    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:11:50.050677    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:11:50.054712    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:11:50.090007    1468 logs.go:282] 0 containers: []
	W1213 10:11:50.090007    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:11:50.096530    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:11:50.138931    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:11:50.142917    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:11:50.175922    1468 logs.go:282] 0 containers: []
	W1213 10:11:50.175922    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:50.179926    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:11:50.215092    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:11:50.219091    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:11:50.256901    1468 logs.go:282] 0 containers: []
	W1213 10:11:50.256942    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:50.262261    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:11:50.303430    1468 logs.go:282] 0 containers: []
	W1213 10:11:50.303430    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:11:50.303975    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:11:50.303975    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:50.367170    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:50.367289    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:50.417547    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:11:50.417630    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:11:50.470275    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:11:50.470275    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:11:50.517031    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:11:50.518031    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:11:50.555773    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:50.555827    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:50.625450    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:50.625450    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:50.723677    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:50.723677    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:11:50.723677    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:11:50.772680    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:11:50.772680    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:11:53.307929    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:53.329912    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:11:53.367786    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:11:53.371606    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:11:53.404137    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:11:53.408198    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:11:53.436598    1468 logs.go:282] 0 containers: []
	W1213 10:11:53.436637    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:11:53.440415    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:11:53.478956    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:11:53.483795    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:11:53.517090    1468 logs.go:282] 0 containers: []
	W1213 10:11:53.517186    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:53.522416    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:11:53.559235    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:11:53.562908    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:11:53.589408    1468 logs.go:282] 0 containers: []
	W1213 10:11:53.589408    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:53.595947    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:11:53.630429    1468 logs.go:282] 0 containers: []
	W1213 10:11:53.630429    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:11:53.630429    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:11:53.630429    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:11:53.681803    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:11:53.681803    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:11:53.732414    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:11:53.732414    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:11:53.782026    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:53.782075    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:53.850682    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:53.850682    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:53.929413    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:53.929413    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:11:53.929413    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:11:53.978642    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:11:53.978642    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:11:54.022314    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:11:54.022314    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:54.082414    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:54.082492    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:56.624199    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:56.650392    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:11:56.696179    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:11:56.699182    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:11:56.728430    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:11:56.733353    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:11:56.767320    1468 logs.go:282] 0 containers: []
	W1213 10:11:56.767320    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:11:56.771696    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:11:56.809579    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:11:56.814882    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:11:56.854380    1468 logs.go:282] 0 containers: []
	W1213 10:11:56.854380    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:56.858177    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:11:56.896292    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:11:56.898780    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:11:56.934138    1468 logs.go:282] 0 containers: []
	W1213 10:11:56.934194    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:56.937489    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:11:56.989752    1468 logs.go:282] 0 containers: []
	W1213 10:11:56.989752    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:11:56.989752    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:11:56.989752    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:11:57.034737    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:11:57.034737    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:11:57.104358    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:11:57.104358    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:11:57.139360    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:11:57.139360    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:57.205364    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:57.205364    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:57.252385    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:57.252385    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:57.347782    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:57.347830    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:11:57.347873    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:11:57.390131    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:11:57.390131    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:11:57.430668    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:57.430668    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:00.015500    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:00.037707    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:12:00.074243    1468 logs.go:282] 1 containers: [4a078925b3b8]
	I1213 10:12:00.078347    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:12:00.107954    1468 logs.go:282] 1 containers: [234db481cf83]
	I1213 10:12:00.111089    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:12:00.142542    1468 logs.go:282] 0 containers: []
	W1213 10:12:00.142542    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:12:00.148370    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:12:00.183642    1468 logs.go:282] 1 containers: [ee2fab724cef]
	I1213 10:12:00.187039    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:12:00.219027    1468 logs.go:282] 0 containers: []
	W1213 10:12:00.219080    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:00.222541    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:12:00.259557    1468 logs.go:282] 1 containers: [871e19efc5fa]
	I1213 10:12:00.263887    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:12:00.294062    1468 logs.go:282] 0 containers: []
	W1213 10:12:00.294062    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:00.298948    1468 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1213 10:12:00.330895    1468 logs.go:282] 0 containers: []
	W1213 10:12:00.330895    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:12:00.330895    1468 logs.go:123] Gathering logs for etcd [234db481cf83] ...
	I1213 10:12:00.330895    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 234db481cf83"
	I1213 10:12:00.374161    1468 logs.go:123] Gathering logs for kube-scheduler [ee2fab724cef] ...
	I1213 10:12:00.374216    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee2fab724cef"
	I1213 10:12:00.419936    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:00.419936    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:00.490424    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:00.490424    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:00.578267    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:12:00.578330    1468 logs.go:123] Gathering logs for kube-apiserver [4a078925b3b8] ...
	I1213 10:12:00.578330    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a078925b3b8"
	I1213 10:12:00.624741    1468 logs.go:123] Gathering logs for kube-controller-manager [871e19efc5fa] ...
	I1213 10:12:00.624741    1468 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 871e19efc5fa"
	I1213 10:12:00.664986    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:12:00.664986    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:12:00.700912    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:12:00.700912    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:00.767134    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:00.767653    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:03.307976    1468 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:03.358145    1468 kubeadm.go:602] duration metric: took 4m2.7726576s to restartPrimaryControlPlane
	W1213 10:12:03.359143    1468 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 10:12:03.365874    1468 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1213 10:12:04.165318    1468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:12:04.192790    1468 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:12:04.207172    1468 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:12:04.212367    1468 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:12:04.228110    1468 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:12:04.228110    1468 kubeadm.go:158] found existing configuration files:
	
	I1213 10:12:04.232912    1468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 10:12:04.247236    1468 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:12:04.251227    1468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:12:04.268227    1468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 10:12:04.283238    1468 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:12:04.287233    1468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:12:04.303228    1468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 10:12:04.316238    1468 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:12:04.320232    1468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:12:04.337227    1468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 10:12:04.353231    1468 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:12:04.359245    1468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:12:04.378233    1468 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:12:04.510158    1468 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1213 10:12:04.590025    1468 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:12:04.696839    1468 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:16:05.448989    1468 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 10:16:05.449072    1468 kubeadm.go:319] 
	I1213 10:16:05.449251    1468 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 10:16:05.452194    1468 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:16:05.452351    1468 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:16:05.452667    1468 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:16:05.452885    1468 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1213 10:16:05.453047    1468 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1213 10:16:05.453275    1468 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1213 10:16:05.453592    1468 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1213 10:16:05.453822    1468 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1213 10:16:05.454163    1468 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1213 10:16:05.454278    1468 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1213 10:16:05.454473    1468 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1213 10:16:05.454736    1468 kubeadm.go:319] CONFIG_INET: enabled
	I1213 10:16:05.454862    1468 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1213 10:16:05.455082    1468 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1213 10:16:05.455424    1468 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1213 10:16:05.455635    1468 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1213 10:16:05.455865    1468 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1213 10:16:05.456041    1468 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1213 10:16:05.456297    1468 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1213 10:16:05.456448    1468 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1213 10:16:05.456616    1468 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1213 10:16:05.456763    1468 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1213 10:16:05.456991    1468 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1213 10:16:05.457207    1468 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1213 10:16:05.457524    1468 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1213 10:16:05.457688    1468 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1213 10:16:05.457732    1468 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1213 10:16:05.457732    1468 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1213 10:16:05.457732    1468 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1213 10:16:05.457732    1468 kubeadm.go:319] OS: Linux
	I1213 10:16:05.458390    1468 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:16:05.458504    1468 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:16:05.458504    1468 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:16:05.458504    1468 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:16:05.458504    1468 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:16:05.458504    1468 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:16:05.458504    1468 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:16:05.458504    1468 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:16:05.459319    1468 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:16:05.459599    1468 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:16:05.459599    1468 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:16:05.459599    1468 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:16:05.460175    1468 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:16:05.462685    1468 out.go:252]   - Generating certificates and keys ...
	I1213 10:16:05.462815    1468 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:16:05.462996    1468 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:16:05.462996    1468 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:16:05.462996    1468 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:16:05.462996    1468 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:16:05.462996    1468 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:16:05.464011    1468 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:16:05.464011    1468 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:16:05.464011    1468 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:16:05.464011    1468 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:16:05.464011    1468 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:16:05.464011    1468 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:16:05.465007    1468 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:16:05.465007    1468 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:16:05.465007    1468 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:16:05.465007    1468 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:16:05.465007    1468 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:16:05.465007    1468 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:16:05.466003    1468 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:16:05.468468    1468 out.go:252]   - Booting up control plane ...
	I1213 10:16:05.468468    1468 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:16:05.469481    1468 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:16:05.469576    1468 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:16:05.469576    1468 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:16:05.470245    1468 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:16:05.470394    1468 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:16:05.470508    1468 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:16:05.470970    1468 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:16:05.470970    1468 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:16:05.471636    1468 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:16:05.471636    1468 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000210617s
	I1213 10:16:05.471636    1468 kubeadm.go:319] 
	I1213 10:16:05.471636    1468 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:16:05.472274    1468 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:16:05.472274    1468 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:16:05.472274    1468 kubeadm.go:319] 
	I1213 10:16:05.472810    1468 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:16:05.472927    1468 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:16:05.472974    1468 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:16:05.472974    1468 kubeadm.go:319] 
	W1213 10:16:05.472974    1468 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000210617s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 10:16:05.478290    1468 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1213 10:16:05.956405    1468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:16:05.978034    1468 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:16:05.983022    1468 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:16:05.996022    1468 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:16:05.996022    1468 kubeadm.go:158] found existing configuration files:
	
	I1213 10:16:06.000710    1468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 10:16:06.015146    1468 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:16:06.019587    1468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:16:06.037932    1468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 10:16:06.054790    1468 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:16:06.058801    1468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:16:06.075804    1468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 10:16:06.089791    1468 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:16:06.093797    1468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:16:06.110795    1468 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 10:16:06.123792    1468 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:16:06.127812    1468 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:16:06.149649    1468 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:16:06.286032    1468 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1213 10:16:06.380952    1468 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:16:06.482425    1468 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:20:07.525057    1468 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 10:20:07.525057    1468 kubeadm.go:319] 
	I1213 10:20:07.525057    1468 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 10:20:07.529061    1468 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:20:07.529061    1468 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:20:07.529810    1468 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:20:07.530222    1468 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1213 10:20:07.530411    1468 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1213 10:20:07.530537    1468 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1213 10:20:07.530759    1468 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1213 10:20:07.530943    1468 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1213 10:20:07.531206    1468 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1213 10:20:07.531361    1468 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1213 10:20:07.531361    1468 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1213 10:20:07.531517    1468 kubeadm.go:319] CONFIG_INET: enabled
	I1213 10:20:07.531714    1468 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1213 10:20:07.531893    1468 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1213 10:20:07.532181    1468 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1213 10:20:07.532459    1468 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1213 10:20:07.532684    1468 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1213 10:20:07.532831    1468 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1213 10:20:07.533023    1468 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1213 10:20:07.533023    1468 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1213 10:20:07.533023    1468 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1213 10:20:07.533023    1468 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1213 10:20:07.533023    1468 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1213 10:20:07.533553    1468 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1213 10:20:07.533629    1468 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1213 10:20:07.533629    1468 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1213 10:20:07.533629    1468 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1213 10:20:07.533629    1468 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1213 10:20:07.533629    1468 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1213 10:20:07.534231    1468 kubeadm.go:319] OS: Linux
	I1213 10:20:07.534297    1468 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:20:07.534297    1468 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:20:07.534297    1468 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:20:07.534297    1468 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:20:07.534297    1468 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:20:07.534880    1468 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:20:07.534880    1468 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:20:07.534880    1468 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:20:07.534880    1468 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:20:07.534880    1468 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:20:07.535433    1468 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:20:07.535433    1468 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:20:07.535433    1468 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:20:07.539335    1468 out.go:252]   - Generating certificates and keys ...
	I1213 10:20:07.539335    1468 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:20:07.539335    1468 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:20:07.539335    1468 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:20:07.539931    1468 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:20:07.539931    1468 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:20:07.539931    1468 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:20:07.539931    1468 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:20:07.539931    1468 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:20:07.540483    1468 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:20:07.540483    1468 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:20:07.540483    1468 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:20:07.540483    1468 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:20:07.540483    1468 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:20:07.541062    1468 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:20:07.541062    1468 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:20:07.541062    1468 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:20:07.541062    1468 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:20:07.541620    1468 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:20:07.541620    1468 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:20:07.543656    1468 out.go:252]   - Booting up control plane ...
	I1213 10:20:07.543656    1468 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:20:07.543656    1468 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:20:07.543656    1468 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:20:07.543656    1468 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:20:07.544651    1468 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:20:07.544651    1468 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:20:07.544651    1468 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:20:07.545255    1468 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:20:07.545345    1468 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:20:07.545345    1468 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:20:07.545345    1468 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000668077s
	I1213 10:20:07.545345    1468 kubeadm.go:319] 
	I1213 10:20:07.545345    1468 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:20:07.545345    1468 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:20:07.546301    1468 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:20:07.546301    1468 kubeadm.go:319] 
	I1213 10:20:07.546301    1468 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:20:07.546301    1468 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:20:07.546301    1468 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:20:07.546301    1468 kubeadm.go:319] 
	I1213 10:20:07.546301    1468 kubeadm.go:403] duration metric: took 12m7.0158129s to StartCluster
	I1213 10:20:07.546301    1468 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:20:07.550341    1468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:20:07.623548    1468 cri.go:89] found id: ""
	I1213 10:20:07.623606    1468 logs.go:282] 0 containers: []
	W1213 10:20:07.623606    1468 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:20:07.623661    1468 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:20:07.627494    1468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:20:07.686283    1468 cri.go:89] found id: ""
	I1213 10:20:07.686283    1468 logs.go:282] 0 containers: []
	W1213 10:20:07.686283    1468 logs.go:284] No container was found matching "etcd"
	I1213 10:20:07.686283    1468 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:20:07.692444    1468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:20:07.742531    1468 cri.go:89] found id: ""
	I1213 10:20:07.742531    1468 logs.go:282] 0 containers: []
	W1213 10:20:07.742531    1468 logs.go:284] No container was found matching "coredns"
	I1213 10:20:07.742531    1468 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:20:07.746548    1468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:20:07.793541    1468 cri.go:89] found id: ""
	I1213 10:20:07.793541    1468 logs.go:282] 0 containers: []
	W1213 10:20:07.793541    1468 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:20:07.793541    1468 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:20:07.797532    1468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:20:07.843537    1468 cri.go:89] found id: ""
	I1213 10:20:07.843537    1468 logs.go:282] 0 containers: []
	W1213 10:20:07.843537    1468 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:20:07.843537    1468 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:20:07.847533    1468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:20:07.894408    1468 cri.go:89] found id: ""
	I1213 10:20:07.894408    1468 logs.go:282] 0 containers: []
	W1213 10:20:07.894408    1468 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:20:07.894408    1468 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:20:07.898626    1468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:20:07.946155    1468 cri.go:89] found id: ""
	I1213 10:20:07.946155    1468 logs.go:282] 0 containers: []
	W1213 10:20:07.946155    1468 logs.go:284] No container was found matching "kindnet"
	I1213 10:20:07.946155    1468 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 10:20:07.951153    1468 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 10:20:07.999971    1468 cri.go:89] found id: ""
	I1213 10:20:07.999971    1468 logs.go:282] 0 containers: []
	W1213 10:20:07.999971    1468 logs.go:284] No container was found matching "storage-provisioner"
	I1213 10:20:07.999971    1468 logs.go:123] Gathering logs for kubelet ...
	I1213 10:20:07.999971    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:20:08.075420    1468 logs.go:123] Gathering logs for dmesg ...
	I1213 10:20:08.075420    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:20:08.116429    1468 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:20:08.116429    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:20:08.221328    1468 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:20:08.221328    1468 logs.go:123] Gathering logs for Docker ...
	I1213 10:20:08.221328    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:20:08.251327    1468 logs.go:123] Gathering logs for container status ...
	I1213 10:20:08.251327    1468 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:20:08.303301    1468 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000668077s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
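The wait-control-plane failure above is kubeadm polling the kubelet's local health endpoint until its four-minute deadline expires. The probe can be reproduced by hand from a shell on the node; these are exactly the checks the output itself recommends (a sketch, not something the test ran):

    curl -sSL http://127.0.0.1:10248/healthz   # a healthy kubelet answers "ok"
    systemctl status kubelet --no-pager
    journalctl -xeu kubelet | tail -n 50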
	W1213 10:20:08.303386    1468 out.go:285] * 
	W1213 10:20:08.305844    1468 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
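For reference, the log collection the box asks for is a single command against this profile (illustrative invocation of the binary under test):

    out/minikube-windows-amd64.exe -p kubernetes-upgrade-481200 logs --file=logs.txt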
	I1213 10:20:08.310746    1468 out.go:203] 
	W1213 10:20:08.315035    1468 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	
	W1213 10:20:08.315035    1468 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 10:20:08.315035    1468 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 10:20:08.318486    1468 out.go:203] 

                                                
                                                
** /stderr **
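The exit reason (K8S_KUBELET_NOT_RUNNING) plus the logged suggestion point at the kubelet cgroup driver. A retry with the suggested flag, reusing the failing invocation recorded just below, would look like this (a sketch; whether it clears this particular run is unverified):

    out/minikube-windows-amd64.exe start -p kubernetes-upgrade-481200 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --driver=docker --extra-config=kubelet.cgroup-driver=systemd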
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-481200 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker : exit status 109
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-481200 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-481200 version --output=json: exit status 1 (10.1506171s)

                                                
                                                
-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "34",
	    "gitVersion": "v1.34.3",
	    "gitCommit": "df11db1c0f08fab3c0baee1e5ce6efbf816af7f1",
	    "gitTreeState": "clean",
	    "buildDate": "2025-12-09T15:06:39Z",
	    "goVersion": "go1.24.11",
	    "compiler": "gc",
	    "platform": "windows/amd64"
	  },
	  "kustomizeVersion": "v5.7.1"
	}

                                                
                                                
-- /stdout --
** stderr ** 
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
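Note the asymmetry: the client half of the query prints fine (kubectl v1.34.3 for windows/amd64) while the server half dies with EOF, and that server half alone produces the exit status 1. A check that must not depend on a live apiserver can use the client-only form (standard kubectl flag):

    kubectl --context kubernetes-upgrade-481200 version --client --output=json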
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-13 10:20:19.5792396 +0000 UTC m=+6661.478548401
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-481200
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-481200:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3f0e6a1cee72fa5ade85cb5aecf8f7266c5ea9ed0f7d6b24ba92da7cf8b3ab60",
	        "Created": "2025-12-13T10:06:42.593930332Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 309281,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:07:26.444463135Z",
	            "FinishedAt": "2025-12-13T10:07:24.202005259Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/3f0e6a1cee72fa5ade85cb5aecf8f7266c5ea9ed0f7d6b24ba92da7cf8b3ab60/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3f0e6a1cee72fa5ade85cb5aecf8f7266c5ea9ed0f7d6b24ba92da7cf8b3ab60/hostname",
	        "HostsPath": "/var/lib/docker/containers/3f0e6a1cee72fa5ade85cb5aecf8f7266c5ea9ed0f7d6b24ba92da7cf8b3ab60/hosts",
	        "LogPath": "/var/lib/docker/containers/3f0e6a1cee72fa5ade85cb5aecf8f7266c5ea9ed0f7d6b24ba92da7cf8b3ab60/3f0e6a1cee72fa5ade85cb5aecf8f7266c5ea9ed0f7d6b24ba92da7cf8b3ab60-json.log",
	        "Name": "/kubernetes-upgrade-481200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-481200:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-481200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/17a03d170bcdcb0bb1418813bf68261f70a53dbe5511f62240918094aa11b43e-init/diff:/var/lib/docker/overlay2/429aa299c6fcdb1695d08ec7c893c57c033afffcd3ec41fc904bf3236db5abde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/17a03d170bcdcb0bb1418813bf68261f70a53dbe5511f62240918094aa11b43e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/17a03d170bcdcb0bb1418813bf68261f70a53dbe5511f62240918094aa11b43e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/17a03d170bcdcb0bb1418813bf68261f70a53dbe5511f62240918094aa11b43e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-481200",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-481200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-481200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-481200",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-481200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "deffd51b7dcd15bac36bb255df13abf77f0720efcca419bf23bb7419860de436",
	            "SandboxKey": "/var/run/docker/netns/deffd51b7dcd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52495"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52496"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52497"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52498"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52499"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-481200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "6752b24b98291fd4efdece2bb121ef30ca8147de3a116519b32d8a1cb597d6ea",
	                    "EndpointID": "d2c46696ba3ed427fb1e493f2be6b5d400c1bb70a02a5cbbf650837cb1be4361",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-481200",
	                        "3f0e6a1cee72"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
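The inspect output shows the container itself is fine: State.Status is "running" with no restarts, and all five ports (22, 2376, 5000, 8443, 32443) are published on 127.0.0.1. The breakage is therefore inside the guest, not at the Docker layer. The same facts can be pulled without scanning the full JSON by using Go-template filters (illustrative; quoting shown for a POSIX shell):

    docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' kubernetes-upgrade-481200
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' kubernetes-upgrade-481200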
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-481200 -n kubernetes-upgrade-481200
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-481200 -n kubernetes-upgrade-481200: exit status 2 (580.6718ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
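Host reports "Running" yet the command exits 2: minikube status encodes unhealthy layers in its exit code even when the host container is up, which is why the harness notes "(may be ok)". A per-component view is available from the same subcommand (illustrative):

    out/minikube-windows-amd64.exe status -p kubernetes-upgrade-481200 --output=json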
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-481200 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p kubernetes-upgrade-481200 logs -n 25: (1.086426s)
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                   │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-416400 sudo systemctl cat kubelet --no-pager                                                                                 │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ ssh     │ -p kindnet-416400 sudo journalctl -xeu kubelet --all --full --no-pager                                                                  │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ ssh     │ -p kindnet-416400 sudo cat /etc/kubernetes/kubelet.conf                                                                                 │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ ssh     │ -p kindnet-416400 sudo cat /var/lib/kubelet/config.yaml                                                                                 │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ addons  │ enable metrics-server -p newest-cni-307000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ newest-cni-307000 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │                     │
	│ ssh     │ -p kindnet-416400 sudo systemctl status docker --all --full --no-pager                                                                  │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo systemctl cat docker --no-pager                                                                                  │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo cat /etc/docker/daemon.json                                                                                      │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo docker system info                                                                                               │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo systemctl status cri-docker --all --full --no-pager                                                              │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo systemctl cat cri-docker --no-pager                                                                              │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                         │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                   │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo cri-dockerd --version                                                                                            │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo systemctl status containerd --all --full --no-pager                                                              │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo systemctl cat containerd --no-pager                                                                              │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo cat /lib/systemd/system/containerd.service                                                                       │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo cat /etc/containerd/config.toml                                                                                  │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo containerd config dump                                                                                           │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo systemctl status crio --all --full --no-pager                                                                    │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │                     │
	│ ssh     │ -p kindnet-416400 sudo systemctl cat crio --no-pager                                                                                    │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                          │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo crio config                                                                                                      │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ delete  │ -p kindnet-416400                                                                                                                       │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ start   │ -p calico-416400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker                            │ calico-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:20:15
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:20:15.287120   12636 out.go:360] Setting OutFile to fd 1260 ...
	I1213 10:20:15.331495   12636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:20:15.331495   12636 out.go:374] Setting ErrFile to fd 628...
	I1213 10:20:15.331495   12636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:20:15.348981   12636 out.go:368] Setting JSON to false
	I1213 10:20:15.351292   12636 start.go:133] hostinfo: {"hostname":"minikube4","uptime":7022,"bootTime":1765614192,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 10:20:15.351292   12636 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 10:20:15.356530   12636 out.go:179] * [calico-416400] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 10:20:15.360745   12636 notify.go:221] Checking for updates...
	I1213 10:20:15.362455   12636 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:20:15.364448   12636 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:20:15.366449   12636 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 10:20:15.369333   12636 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:20:15.371299   12636 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:20:11.583434    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:20:11.714554    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:11.714554    8468 retry.go:31] will retry after 7.485124041s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
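Each apply fails while downloading the OpenAPI schema for validation, because validation needs the very apiserver that is down; retry.go then reschedules the attempt with backoff. The error text suggests --validate=false, but that only skips the schema fetch; the apply still needs a reachable server, so in this run it would merely change the failure mode (illustrative form of the log's own suggestion):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false -f /etc/kubernetes/addons/storageclass.yaml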
	I1213 10:20:13.455547    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:20:13.541922    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:13.542020    8468 retry.go:31] will retry after 8.135198811s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:14.992257    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:20:15.096205    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:15.096205    8468 retry.go:31] will retry after 7.728239711s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:15.374390   12636 config.go:182] Loaded profile config "kubernetes-upgrade-481200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:20:15.374390   12636 config.go:182] Loaded profile config "newest-cni-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:20:15.375015   12636 config.go:182] Loaded profile config "no-preload-803600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:20:15.375015   12636 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:20:15.495695   12636 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 10:20:15.498696   12636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:20:15.733486   12636 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:20:15.71535793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:20:15.738486   12636 out.go:179] * Using the docker driver based on user configuration
	I1213 10:20:15.740489   12636 start.go:309] selected driver: docker
	I1213 10:20:15.740489   12636 start.go:927] validating driver "docker" against <nil>
	I1213 10:20:15.740489   12636 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:20:15.781802   12636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:20:16.030857   12636 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:20:15.99890589 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:20:16.031134   12636 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:20:16.032052   12636 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:20:16.036910   12636 out.go:179] * Using Docker Desktop driver with root privileges
	I1213 10:20:16.040136   12636 cni.go:84] Creating CNI manager for "calico"
	I1213 10:20:16.040172   12636 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1213 10:20:16.040342   12636 start.go:353] cluster config:
	{Name:calico-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:20:16.044068   12636 out.go:179] * Starting "calico-416400" primary control-plane node in "calico-416400" cluster
	I1213 10:20:16.049890   12636 cache.go:134] Beginning downloading kic base image for docker with docker
	I1213 10:20:16.056247   12636 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:20:16.059930   12636 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:20:16.059930   12636 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:20:16.059930   12636 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1213 10:20:16.059930   12636 cache.go:65] Caching tarball of preloaded images
	I1213 10:20:16.060647   12636 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 10:20:16.060647   12636 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1213 10:20:16.061204   12636 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-416400\config.json ...
	I1213 10:20:16.061204   12636 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-416400\config.json: {Name:mkcf800a4ae64f3200d32c354e86eeed9aafa8b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:20:16.137345   12636 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:20:16.137345   12636 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:20:16.137345   12636 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:20:16.137345   12636 start.go:360] acquireMachinesLock for calico-416400: {Name:mkc0aedfa981a5bbfa54acd6dac00d6300cdd08b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:20:16.137345   12636 start.go:364] duration metric: took 0s to acquireMachinesLock for "calico-416400"
	I1213 10:20:16.138552   12636 start.go:93] Provisioning new machine with config: &{Name:calico-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 10:20:16.138787   12636 start.go:125] createHost starting for "" (driver="docker")
	I1213 10:20:16.141876   12636 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 10:20:16.142399   12636 start.go:159] libmachine.API.Create for "calico-416400" (driver="docker")
	I1213 10:20:16.142481   12636 client.go:173] LocalClient.Create starting
	I1213 10:20:16.142481   12636 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1213 10:20:16.143000   12636 main.go:143] libmachine: Decoding PEM data...
	I1213 10:20:16.143097   12636 main.go:143] libmachine: Parsing certificate...
	I1213 10:20:16.143293   12636 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1213 10:20:16.143293   12636 main.go:143] libmachine: Decoding PEM data...
	I1213 10:20:16.143293   12636 main.go:143] libmachine: Parsing certificate...
	I1213 10:20:16.148080   12636 cli_runner.go:164] Run: docker network inspect calico-416400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 10:20:16.200145   12636 cli_runner.go:211] docker network inspect calico-416400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 10:20:16.204138   12636 network_create.go:284] running [docker network inspect calico-416400] to gather additional debugging logs...
	I1213 10:20:16.204138   12636 cli_runner.go:164] Run: docker network inspect calico-416400
	W1213 10:20:16.253137   12636 cli_runner.go:211] docker network inspect calico-416400 returned with exit code 1
	I1213 10:20:16.253137   12636 network_create.go:287] error running [docker network inspect calico-416400]: docker network inspect calico-416400: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-416400 not found
	I1213 10:20:16.253137   12636 network_create.go:289] output of [docker network inspect calico-416400]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-416400 not found
	
	** /stderr **
	I1213 10:20:16.257137   12636 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:20:16.330077   12636 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:20:16.346017   12636 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:20:16.358349   12636 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018251d0}
	I1213 10:20:16.358349   12636 network_create.go:124] attempt to create docker network calico-416400 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1213 10:20:16.363859   12636 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-416400 calico-416400
	W1213 10:20:16.424936   12636 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-416400 calico-416400 returned with exit code 1
	W1213 10:20:16.424936   12636 network_create.go:149] failed to create docker network calico-416400 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-416400 calico-416400: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1213 10:20:16.424936   12636 network_create.go:116] failed to create docker network calico-416400 192.168.67.0/24, will retry: subnet is taken
	I1213 10:20:16.454193   12636 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:20:16.472188   12636 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0017afb60}
	I1213 10:20:16.472188   12636 network_create.go:124] attempt to create docker network calico-416400 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 10:20:16.477219   12636 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-416400 calico-416400
	W1213 10:20:16.525195   12636 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-416400 calico-416400 returned with exit code 1
	W1213 10:20:16.525195   12636 network_create.go:149] failed to create docker network calico-416400 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-416400 calico-416400: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1213 10:20:16.525195   12636 network_create.go:116] failed to create docker network calico-416400 192.168.76.0/24, will retry: subnet is taken
	I1213 10:20:16.549487   12636 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:20:16.563730   12636 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016e2a80}
	I1213 10:20:16.564288   12636 network_create.go:124] attempt to create docker network calico-416400 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1213 10:20:16.568567   12636 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-416400 calico-416400
	W1213 10:20:16.619251   12636 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-416400 calico-416400 returned with exit code 1
	W1213 10:20:16.619777   12636 network_create.go:149] failed to create docker network calico-416400 192.168.85.0/24 with gateway 192.168.85.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-416400 calico-416400: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1213 10:20:16.619777   12636 network_create.go:116] failed to create docker network calico-416400 192.168.85.0/24, will retry: subnet is taken
	I1213 10:20:16.642527   12636 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:20:16.656148   12636 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00186baa0}
	I1213 10:20:16.656148   12636 network_create.go:124] attempt to create docker network calico-416400 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1213 10:20:16.659222   12636 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-416400 calico-416400
	I1213 10:20:16.801318   12636 network_create.go:108] docker network calico-416400 192.168.94.0/24 created
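
The three "Pool overlaps with other one on this address space" failures above are expected behavior rather than errors: minikube walks candidate /24 subnets (192.168.67.0, 192.168.76.0, 192.168.85.0) until Docker accepts one, here 192.168.94.0/24. To see which subnets are already allocated on a host, something along these lines works (a sketch; exact output varies with the Docker version):

	# Print every Docker network together with its IPAM subnet(s).
	docker network inspect $(docker network ls -q) \
	  --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'

	# Restrict the listing to networks minikube itself created:
	docker network ls --filter label=created_by.minikube.sigs.k8s.io=true
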
	I1213 10:20:16.801415   12636 kic.go:121] calculated static IP "192.168.94.2" for the "calico-416400" container
	I1213 10:20:16.809248   12636 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 10:20:16.870174   12636 cli_runner.go:164] Run: docker volume create calico-416400 --label name.minikube.sigs.k8s.io=calico-416400 --label created_by.minikube.sigs.k8s.io=true
	I1213 10:20:16.939271   12636 oci.go:103] Successfully created a docker volume calico-416400
	I1213 10:20:16.942761   12636 cli_runner.go:164] Run: docker run --rm --name calico-416400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-416400 --entrypoint /usr/bin/test -v calico-416400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 10:20:18.366525   12636 cli_runner.go:217] Completed: docker run --rm --name calico-416400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-416400 --entrypoint /usr/bin/test -v calico-416400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.423744s)
	I1213 10:20:18.366525   12636 oci.go:107] Successfully prepared a docker volume calico-416400
	I1213 10:20:18.366525   12636 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:20:18.366525   12636 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 10:20:18.371176   12636 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-416400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 10:20:19.204769    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:20:19.294020    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:19.294020    8468 retry.go:31] will retry after 11.049523391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:20:20.240166    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
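
The EOF against https://127.0.0.1:53494 reflects how the Docker driver exposes the cluster: the node container publishes apiserver port 8443 on a random localhost port, and that tunnel returns EOF when nothing is listening behind it. A quick sketch for checking both ends (assuming, as in this log, the node container is named after the profile):

	# How is the container's 8443 published on the host?
	docker port no-preload-803600 8443

	# Is an apiserver container actually running inside the node?
	docker exec no-preload-803600 docker ps --filter name=kube-apiserver
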
	
	
	==> Docker <==
	Dec 13 10:07:57 kubernetes-upgrade-481200 systemd[1]: Starting docker.service - Docker Application Container Engine...
	Dec 13 10:07:58 kubernetes-upgrade-481200 dockerd[1460]: time="2025-12-13T10:07:58.011276283Z" level=info msg="Starting up"
	Dec 13 10:07:58 kubernetes-upgrade-481200 dockerd[1460]: time="2025-12-13T10:07:58.037243291Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	Dec 13 10:07:58 kubernetes-upgrade-481200 dockerd[1460]: time="2025-12-13T10:07:58.037422608Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	Dec 13 10:07:58 kubernetes-upgrade-481200 dockerd[1460]: time="2025-12-13T10:07:58.037442409Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	Dec 13 10:07:58 kubernetes-upgrade-481200 dockerd[1460]: time="2025-12-13T10:07:58.054121656Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	Dec 13 10:07:58 kubernetes-upgrade-481200 dockerd[1460]: time="2025-12-13T10:07:58.072380249Z" level=info msg="Loading containers: start."
	Dec 13 10:07:58 kubernetes-upgrade-481200 dockerd[1460]: time="2025-12-13T10:07:58.078415709Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 13 10:07:58 kubernetes-upgrade-481200 dockerd[1460]: time="2025-12-13T10:07:58.147766340Z" level=info msg="Restoring containers: start."
	Dec 13 10:07:58 kubernetes-upgrade-481200 dockerd[1460]: time="2025-12-13T10:07:58.282232509Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Dec 13 10:07:58 kubernetes-upgrade-481200 dockerd[1460]: time="2025-12-13T10:07:58.332263249Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Dec 13 10:07:58 kubernetes-upgrade-481200 dockerd[1460]: time="2025-12-13T10:07:58.702156750Z" level=info msg="Loading containers: done."
	Dec 13 10:07:58 kubernetes-upgrade-481200 dockerd[1460]: time="2025-12-13T10:07:58.846508436Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 13 10:07:58 kubernetes-upgrade-481200 dockerd[1460]: time="2025-12-13T10:07:58.846671951Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 13 10:07:58 kubernetes-upgrade-481200 dockerd[1460]: time="2025-12-13T10:07:58.846684753Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 13 10:07:58 kubernetes-upgrade-481200 dockerd[1460]: time="2025-12-13T10:07:58.846725056Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 10:07:58 kubernetes-upgrade-481200 dockerd[1460]: time="2025-12-13T10:07:58.846732357Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 13 10:07:58 kubernetes-upgrade-481200 dockerd[1460]: time="2025-12-13T10:07:58.846756659Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 13 10:07:58 kubernetes-upgrade-481200 dockerd[1460]: time="2025-12-13T10:07:58.846806664Z" level=info msg="Initializing buildkit"
	Dec 13 10:07:58 kubernetes-upgrade-481200 dockerd[1460]: time="2025-12-13T10:07:58.976669906Z" level=info msg="Completed buildkit initialization"
	Dec 13 10:07:58 kubernetes-upgrade-481200 dockerd[1460]: time="2025-12-13T10:07:58.986031575Z" level=info msg="Daemon has completed initialization"
	Dec 13 10:07:58 kubernetes-upgrade-481200 dockerd[1460]: time="2025-12-13T10:07:58.986171187Z" level=info msg="API listen on /run/docker.sock"
	Dec 13 10:07:58 kubernetes-upgrade-481200 dockerd[1460]: time="2025-12-13T10:07:58.986195390Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 10:07:58 kubernetes-upgrade-481200 dockerd[1460]: time="2025-12-13T10:07:58.986241794Z" level=info msg="API listen on [::]:2376"
	Dec 13 10:07:58 kubernetes-upgrade-481200 systemd[1]: Started docker.service - Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +5.822306] CPU: 8 PID: 417127 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f309eb02b20
	[  +0.000006] Code: Unable to access opcode bytes at RIP 0x7f309eb02af6.
	[  +0.000001] RSP: 002b:00007ffeada4c8d0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.951842] CPU: 14 PID: 417302 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7fecfe6b9b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7fecfe6b9af6.
	[  +0.000001] RSP: 002b:00007ffdf6aeb3d0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 10:20:21 up  1:56,  0 user,  load average: 3.46, 3.21, 3.27
	Linux kubernetes-upgrade-481200 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:20:17 kubernetes-upgrade-481200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:20:18 kubernetes-upgrade-481200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 334.
	Dec 13 10:20:18 kubernetes-upgrade-481200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:20:18 kubernetes-upgrade-481200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:20:18 kubernetes-upgrade-481200 kubelet[25705]: E1213 10:20:18.700884   25705 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:20:18 kubernetes-upgrade-481200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:20:18 kubernetes-upgrade-481200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:20:19 kubernetes-upgrade-481200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 335.
	Dec 13 10:20:19 kubernetes-upgrade-481200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:20:19 kubernetes-upgrade-481200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:20:19 kubernetes-upgrade-481200 kubelet[25718]: E1213 10:20:19.443080   25718 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:20:19 kubernetes-upgrade-481200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:20:19 kubernetes-upgrade-481200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:20:20 kubernetes-upgrade-481200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 336.
	Dec 13 10:20:20 kubernetes-upgrade-481200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:20:20 kubernetes-upgrade-481200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:20:20 kubernetes-upgrade-481200 kubelet[25743]: E1213 10:20:20.211700   25743 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:20:20 kubernetes-upgrade-481200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:20:20 kubernetes-upgrade-481200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:20:20 kubernetes-upgrade-481200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 337.
	Dec 13 10:20:20 kubernetes-upgrade-481200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:20:20 kubernetes-upgrade-481200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:20:20 kubernetes-upgrade-481200 kubelet[25814]: E1213 10:20:20.948390   25814 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:20:20 kubernetes-upgrade-481200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:20:20 kubernetes-upgrade-481200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
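
The kubelet journal above is the actual failure behind this test: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host ("cgroup v1 support is unsupported and will be removed in a future release"), systemd restarts it in a tight loop (restart counters 334 through 337), the apiserver therefore never comes up, and every kubectl call ends in connection refused. The dockerd warning earlier in the same dump ("Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029") points at the same host condition. A standard way to confirm which cgroup version a host runs (a sketch, executed inside the node):

	# "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" means legacy v1.
	stat -fc %T /sys/fs/cgroup/

On a WSL2-backed Docker Desktop host like this one, the remedy is host-side (moving the WSL kernel to the unified cgroup v2 hierarchy); no amount of kubelet restarting can work around it.
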
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-481200 -n kubernetes-upgrade-481200
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-481200 -n kubernetes-upgrade-481200: exit status 2 (582.1396ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-481200" apiserver is not running, skipping kubectl commands (state="Stopped")
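
--format={{.APIServer}} is a Go template evaluated against minikube's status structure, and "Stopped" with exit status 2 is the tolerated "may be ok" case the helper notes above. When triaging by hand, printing several fields at once is often more useful; a sketch using fields known to exist on that structure (Name, Host, Kubelet, APIServer):

	out/minikube-windows-amd64.exe status -p kubernetes-upgrade-481200 --format "{{.Name}}: host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}"
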
helpers_test.go:176: Cleaning up "kubernetes-upgrade-481200" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-481200
E1213 10:20:22.024722    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-481200: (8.6161874s)
--- FAIL: TestKubernetesUpgrade (846.93s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (528.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-803600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p no-preload-803600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m45.4357497s)

                                                
                                                
-- stdout --
	* [no-preload-803600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "no-preload-803600" primary control-plane node in "no-preload-803600" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 10:09:19.546798    2828 out.go:360] Setting OutFile to fd 1596 ...
	I1213 10:09:19.589594    2828 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:09:19.589594    2828 out.go:374] Setting ErrFile to fd 1844...
	I1213 10:09:19.589594    2828 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:09:19.605108    2828 out.go:368] Setting JSON to false
	I1213 10:09:19.606907    2828 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6366,"bootTime":1765614192,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 10:09:19.606907    2828 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 10:09:19.612745    2828 out.go:179] * [no-preload-803600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 10:09:19.617410    2828 notify.go:221] Checking for updates...
	I1213 10:09:19.619620    2828 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:09:19.625186    2828 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:09:19.627391    2828 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 10:09:19.628939    2828 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:09:19.631179    2828 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:09:19.634035    2828 config.go:182] Loaded profile config "cert-expiration-980800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 10:09:19.634035    2828 config.go:182] Loaded profile config "kubernetes-upgrade-481200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:09:19.634843    2828 config.go:182] Loaded profile config "old-k8s-version-987400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I1213 10:09:19.634903    2828 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:09:19.746427    2828 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 10:09:19.749793    2828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:09:19.987858    2828 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:09:19.967142396 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:09:19.990805    2828 out.go:179] * Using the docker driver based on user configuration
	I1213 10:09:19.994478    2828 start.go:309] selected driver: docker
	I1213 10:09:19.994535    2828 start.go:927] validating driver "docker" against <nil>
	I1213 10:09:19.994551    2828 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:09:20.035299    2828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:09:20.290917    2828 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:09:20.269343779 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:09:20.291862    2828 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:09:20.291862    2828 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:09:20.296847    2828 out.go:179] * Using Docker Desktop driver with root privileges
	I1213 10:09:20.299848    2828 cni.go:84] Creating CNI manager for ""
	I1213 10:09:20.299848    2828 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 10:09:20.299848    2828 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 10:09:20.299848    2828 start.go:353] cluster config:
	{Name:no-preload-803600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-803600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:09:20.303861    2828 out.go:179] * Starting "no-preload-803600" primary control-plane node in "no-preload-803600" cluster
	I1213 10:09:20.306851    2828 cache.go:134] Beginning downloading kic base image for docker with docker
	I1213 10:09:20.308854    2828 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:09:20.311849    2828 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 10:09:20.311849    2828 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:09:20.311849    2828 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\config.json ...
	I1213 10:09:20.311849    2828 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1213 10:09:20.311849    2828 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1213 10:09:20.311849    2828 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1213 10:09:20.311849    2828 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1213 10:09:20.311849    2828 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1213 10:09:20.311849    2828 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\config.json: {Name:mk5d6b4865fb8a927afdd80a51c0eaf9c39e7803 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:09:20.312861    2828 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1213 10:09:20.312861    2828 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1213 10:09:20.312861    2828 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1213 10:09:20.489285    2828 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:09:20.489285    2828 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:09:20.489285    2828 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:09:20.489285    2828 start.go:360] acquireMachinesLock for no-preload-803600: {Name:mkcf862c61e4405506d111940ccf3455664885da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:09:20.489285    2828 start.go:364] duration metric: took 0s to acquireMachinesLock for "no-preload-803600"
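
The acquireMachinesLock line above serializes machine creation across the concurrent tests in this run; the lock spec logged is {Delay:500ms Timeout:10m0s}, i.e. poll every 500ms and give up after ten minutes. A minimal sketch of that polling pattern in Go (illustrative only; the lock-file mechanism and file name here are assumptions, not minikube's actual implementation):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // tryLock polls for an exclusive lock file, mirroring the
    // {Delay:500ms Timeout:10m0s} spec in the log line above.
    func tryLock(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s", path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        // "machines.lock" is a hypothetical name standing in for the named
        // mutex minikube acquires before provisioning "no-preload-803600".
        release, err := tryLock("machines.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer release()
        fmt.Println("lock held; safe to provision the machine")
    }

Here the lock was uncontended, which is why the duration metric above reports 0s.
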
	I1213 10:09:20.489285    2828 start.go:93] Provisioning new machine with config: &{Name:no-preload-803600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-803600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 10:09:20.489285    2828 start.go:125] createHost starting for "" (driver="docker")
	I1213 10:09:20.493281    2828 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 10:09:20.493281    2828 start.go:159] libmachine.API.Create for "no-preload-803600" (driver="docker")
	I1213 10:09:20.493281    2828 client.go:173] LocalClient.Create starting
	I1213 10:09:20.494287    2828 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1213 10:09:20.494287    2828 main.go:143] libmachine: Decoding PEM data...
	I1213 10:09:20.494287    2828 main.go:143] libmachine: Parsing certificate...
	I1213 10:09:20.494287    2828 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1213 10:09:20.494287    2828 main.go:143] libmachine: Decoding PEM data...
	I1213 10:09:20.494287    2828 main.go:143] libmachine: Parsing certificate...
	I1213 10:09:20.501291    2828 cli_runner.go:164] Run: docker network inspect no-preload-803600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 10:09:20.654708    2828 cli_runner.go:211] docker network inspect no-preload-803600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 10:09:20.658706    2828 network_create.go:284] running [docker network inspect no-preload-803600] to gather additional debugging logs...
	I1213 10:09:20.658706    2828 cli_runner.go:164] Run: docker network inspect no-preload-803600
	W1213 10:09:20.729091    2828 cli_runner.go:211] docker network inspect no-preload-803600 returned with exit code 1
	I1213 10:09:20.729150    2828 network_create.go:287] error running [docker network inspect no-preload-803600]: docker network inspect no-preload-803600: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-803600 not found
	I1213 10:09:20.729205    2828 network_create.go:289] output of [docker network inspect no-preload-803600]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-803600 not found
	
	** /stderr **
	I1213 10:09:20.734395    2828 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:09:21.223669    2828 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:09:21.263455    2828 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:09:21.419303    2828 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:09:21.527346    2828 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:09:21.600192    2828 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e11a70}
	I1213 10:09:21.600192    2828 network_create.go:124] attempt to create docker network no-preload-803600 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1213 10:09:21.606428    2828 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-803600 no-preload-803600
	W1213 10:09:21.789562    2828 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-803600 no-preload-803600 returned with exit code 1
	W1213 10:09:21.789562    2828 network_create.go:149] failed to create docker network no-preload-803600 192.168.85.0/24 with gateway 192.168.85.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-803600 no-preload-803600: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1213 10:09:21.789562    2828 network_create.go:116] failed to create docker network no-preload-803600 192.168.85.0/24, will retry: subnet is taken
	I1213 10:09:21.837595    2828 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:09:21.876429    2828 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e6a690}
	I1213 10:09:21.876429    2828 network_create.go:124] attempt to create docker network no-preload-803600 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1213 10:09:21.881423    2828 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-803600 no-preload-803600
	W1213 10:09:22.131263    2828 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-803600 no-preload-803600 returned with exit code 1
	W1213 10:09:22.131263    2828 network_create.go:149] failed to create docker network no-preload-803600 192.168.94.0/24 with gateway 192.168.94.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-803600 no-preload-803600: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1213 10:09:22.131263    2828 network_create.go:116] failed to create docker network no-preload-803600 192.168.94.0/24, will retry: subnet is taken
	I1213 10:09:22.234058    2828 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:09:22.284389    2828 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed46f0}
	I1213 10:09:22.284389    2828 network_create.go:124] attempt to create docker network no-preload-803600 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1213 10:09:22.289391    2828 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-803600 no-preload-803600
	I1213 10:09:22.554562    2828 network_create.go:108] docker network no-preload-803600 192.168.103.0/24 created
	I1213 10:09:22.554562    2828 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-803600" container
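
The network_create.go lines above show the subnet probe: minikube steps through candidate private /24s (192.168.85.0, 192.168.94.0, 192.168.103.0, ...), attempts a docker network create for each, and treats Docker's "Pool overlaps with other one on this address space" error as "subnet taken, advance and retry". Once a create succeeds, the gateway takes .1 and the node is assigned the first client address, .2. A rough, self-contained sketch of that loop (the starting subnet and step of 9 are read off the log; the network name is a placeholder and the code is illustrative, not minikube's own):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        name := "demo-net" // hypothetical network name
        for third := 85; third <= 255; third += 9 { // 192.168.85.0, .94.0, .103.0, ...
            subnet := fmt.Sprintf("192.168.%d.0/24", third)
            gateway := fmt.Sprintf("192.168.%d.1", third)
            out, err := exec.Command("docker", "network", "create",
                "--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
            if err == nil {
                // the node then gets the first client address in the range
                fmt.Printf("created %s on %s; node IP would be 192.168.%d.2\n", name, subnet, third)
                return
            }
            if strings.Contains(string(out), "Pool overlaps") {
                continue // subnet is taken, try the next candidate
            }
            fmt.Printf("unexpected error: %v: %s\n", err, out)
            return
        }
    }

Against a daemon whose address space already covers the first two candidates, this would succeed on the third attempt, matching the sequence logged above.
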
	I1213 10:09:22.572560    2828 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 10:09:22.652559    2828 cli_runner.go:164] Run: docker volume create no-preload-803600 --label name.minikube.sigs.k8s.io=no-preload-803600 --label created_by.minikube.sigs.k8s.io=true
	I1213 10:09:22.733370    2828 oci.go:103] Successfully created a docker volume no-preload-803600
	I1213 10:09:22.738387    2828 cli_runner.go:164] Run: docker run --rm --name no-preload-803600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-803600 --entrypoint /usr/bin/test -v no-preload-803600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 10:09:23.595942    2828 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:09:23.595942    2828 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 10:09:23.608391    2828 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:09:23.609062    2828 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1213 10:09:23.610103    2828 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 3.2971974s
	I1213 10:09:23.610715    2828 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1213 10:09:23.612498    2828 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 10:09:23.621066    2828 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:09:23.622047    2828 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1213 10:09:23.622047    2828 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 3.3101533s
	I1213 10:09:23.622047    2828 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1213 10:09:23.625060    2828 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:09:23.625060    2828 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 10:09:23.637067    2828 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 10:09:23.664071    2828 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:09:23.665054    2828 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1213 10:09:23.665054    2828 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.3521478s
	I1213 10:09:23.665054    2828 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	W1213 10:09:23.686960    2828 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1213 10:09:23.691041    2828 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:09:23.691041    2828 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 10:09:23.691760    2828 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:09:23.691760    2828 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 10:09:23.707321    2828 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 10:09:23.707321    2828 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 10:09:23.737297    2828 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:09:23.738312    2828 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	W1213 10:09:23.744299    2828 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1213 10:09:23.747297    2828 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	W1213 10:09:23.793303    2828 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.13.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1213 10:09:23.843299    2828 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1213 10:09:23.898040    2828 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1213 10:09:24.113900    2828 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1213 10:09:24.117669    2828 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1213 10:09:24.139230    2828 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1213 10:09:24.177053    2828 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1213 10:09:24.180497    2828 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1213 10:09:24.367061    2828 cli_runner.go:217] Completed: docker run --rm --name no-preload-803600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-803600 --entrypoint /usr/bin/test -v no-preload-803600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.6286514s)
	I1213 10:09:24.367061    2828 oci.go:107] Successfully prepared a docker volume no-preload-803600
	I1213 10:09:24.367061    2828 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 10:09:24.372052    2828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:09:24.609204    2828 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:09:24.591127918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:09:24.612172    2828 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 10:09:24.866390    2828 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-803600 --name no-preload-803600 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-803600 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-803600 --network no-preload-803600 --ip 192.168.103.2 --volume no-preload-803600:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 10:09:24.933984    2828 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1213 10:09:24.933984    2828 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 4.6220724s
	I1213 10:09:24.933984    2828 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1213 10:09:25.542840    2828 cli_runner.go:164] Run: docker container inspect no-preload-803600 --format={{.State.Running}}
	I1213 10:09:25.608042    2828 cli_runner.go:164] Run: docker container inspect no-preload-803600 --format={{.State.Status}}
	I1213 10:09:25.674048    2828 cli_runner.go:164] Run: docker exec no-preload-803600 stat /var/lib/dpkg/alternatives/iptables
	I1213 10:09:25.789956    2828 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1213 10:09:25.790317    2828 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 5.4773812s
	I1213 10:09:25.790317    2828 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1213 10:09:25.818513    2828 oci.go:144] the created container "no-preload-803600" has a running status.
	I1213 10:09:25.818513    2828 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-803600\id_rsa...
	I1213 10:09:25.832517    2828 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1213 10:09:25.832741    2828 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 5.5208163s
	I1213 10:09:25.832741    2828 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1213 10:09:25.931268    2828 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1213 10:09:25.931268    2828 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 5.6183309s
	I1213 10:09:25.931268    2828 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1213 10:09:25.963264    2828 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1213 10:09:25.963264    2828 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 5.6513374s
	I1213 10:09:25.963264    2828 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1213 10:09:25.963264    2828 cache.go:87] Successfully saved all images to host disk.
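
The cache.go lines above trace one path per image: acquire a per-image lock, return early if the sanitized tar under .minikube\cache\images already exists, otherwise look the image up in the local daemon, fall back to an anonymous registry pull when the Windows credential helper fails (the "A specified logon session does not exist" warnings), and finally write the image out as a tar. A condensed sketch of that check-then-fetch flow using go-containerregistry, a library minikube also builds on; the exact call sequence and the destination path here are illustrative, not minikube's own code:

    package main

    import (
        "fmt"
        "os"

        "github.com/google/go-containerregistry/pkg/name"
        "github.com/google/go-containerregistry/pkg/v1/remote"
        "github.com/google/go-containerregistry/pkg/v1/tarball"
    )

    func cacheImage(image, dest string) error {
        if _, err := os.Stat(dest); err == nil {
            return nil // cache hit, mirrors the "cache image ... exists" lines above
        }
        ref, err := name.ParseReference(image)
        if err != nil {
            return err
        }
        // Anonymous pull: the log shows authn falling back to anon after
        // the credential helper's logon session is gone.
        img, err := remote.Image(ref)
        if err != nil {
            return err
        }
        return tarball.WriteToFile(dest, ref, img)
    }

    func main() {
        // "pause_3.10.1.tar" is a placeholder for the sanitized cache path.
        if err := cacheImage("registry.k8s.io/pause:3.10.1", "pause_3.10.1.tar"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }

The anonymous fallback matters on this host: every authn lookup above failed with a dead logon session, yet all of the images still cached successfully.
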
	I1213 10:09:26.026724    2828 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-803600\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 10:09:26.107123    2828 cli_runner.go:164] Run: docker container inspect no-preload-803600 --format={{.State.Status}}
	I1213 10:09:26.173144    2828 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 10:09:26.173144    2828 kic_runner.go:114] Args: [docker exec --privileged no-preload-803600 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 10:09:26.297145    2828 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-803600\id_rsa...
	I1213 10:09:28.592632    2828 cli_runner.go:164] Run: docker container inspect no-preload-803600 --format={{.State.Status}}
	I1213 10:09:28.650046    2828 machine.go:94] provisionDockerMachine start ...
	I1213 10:09:28.654044    2828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:09:28.708048    2828 main.go:143] libmachine: Using SSH client type: native
	I1213 10:09:28.721037    2828 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 52686 <nil> <nil>}
	I1213 10:09:28.721037    2828 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:09:28.911455    2828 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-803600
	
	I1213 10:09:28.911455    2828 ubuntu.go:182] provisioning hostname "no-preload-803600"
	I1213 10:09:28.914661    2828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:09:28.975471    2828 main.go:143] libmachine: Using SSH client type: native
	I1213 10:09:28.976069    2828 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 52686 <nil> <nil>}
	I1213 10:09:28.976069    2828 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-803600 && echo "no-preload-803600" | sudo tee /etc/hostname
	I1213 10:09:29.177813    2828 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-803600
	
	I1213 10:09:29.182421    2828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:09:29.249930    2828 main.go:143] libmachine: Using SSH client type: native
	I1213 10:09:29.250928    2828 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 52686 <nil> <nil>}
	I1213 10:09:29.250928    2828 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-803600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-803600/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-803600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:09:29.432500    2828 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:09:29.433037    2828 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1213 10:09:29.433112    2828 ubuntu.go:190] setting up certificates
	I1213 10:09:29.433112    2828 provision.go:84] configureAuth start
	I1213 10:09:29.438455    2828 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-803600
	I1213 10:09:29.497219    2828 provision.go:143] copyHostCerts
	I1213 10:09:29.497219    2828 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1213 10:09:29.497219    2828 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1213 10:09:29.497219    2828 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1213 10:09:29.498214    2828 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1213 10:09:29.498214    2828 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1213 10:09:29.498214    2828 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1213 10:09:29.499217    2828 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1213 10:09:29.499217    2828 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1213 10:09:29.499217    2828 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1213 10:09:29.500229    2828 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.no-preload-803600 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-803600]
	I1213 10:09:29.547145    2828 provision.go:177] copyRemoteCerts
	I1213 10:09:29.551114    2828 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:09:29.556922    2828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:09:29.613117    2828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52686 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-803600\id_rsa Username:docker}
	I1213 10:09:29.735713    2828 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:09:29.772063    2828 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:09:29.802520    2828 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:09:29.831955    2828 provision.go:87] duration metric: took 398.8378ms to configureAuth
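
configureAuth above generates a server certificate signed by the local minikube CA, with SANs covering every name the node may be reached by: 127.0.0.1, the container's static IP 192.168.103.2, localhost, minikube, and the profile name. The crypto/x509 sketch below shows the shape of that step; the throwaway in-memory CA stands in for ca.pem/ca-key.pem, and error handling is elided for brevity:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway CA standing in for .minikube\certs\ca.pem / ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration above
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert carrying the same SAN set the log reports for this node.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-803600"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            DNSNames:     []string{"localhost", "minikube", "no-preload-803600"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        fmt.Printf("server cert: %d bytes, SANs %v %v\n", len(srvDER), srvTmpl.DNSNames, srvTmpl.IPAddresses)
    }

The resulting server.pem and server-key.pem are the files copyRemoteCerts pushes to /etc/docker a few lines above.
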
	I1213 10:09:29.832953    2828 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:09:29.832953    2828 config.go:182] Loaded profile config "no-preload-803600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:09:29.835953    2828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:09:29.888973    2828 main.go:143] libmachine: Using SSH client type: native
	I1213 10:09:29.888973    2828 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 52686 <nil> <nil>}
	I1213 10:09:29.889958    2828 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 10:09:30.071518    2828 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1213 10:09:30.071518    2828 ubuntu.go:71] root file system type: overlay
	I1213 10:09:30.072042    2828 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 10:09:30.075538    2828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:09:30.132876    2828 main.go:143] libmachine: Using SSH client type: native
	I1213 10:09:30.132876    2828 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 52686 <nil> <nil>}
	I1213 10:09:30.132876    2828 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 10:09:30.324941    2828 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 10:09:30.327942    2828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:09:30.378941    2828 main.go:143] libmachine: Using SSH client type: native
	I1213 10:09:30.379941    2828 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 52686 <nil> <nil>}
	I1213 10:09:30.379941    2828 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 10:09:31.819347    2828 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-13 10:09:30.321014839 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
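The command issued at 10:09:30.379 gates the restart on a real change: diff -u exits zero when the old and new units already match, so the block after || (move the .new file into place, daemon-reload, enable, restart) runs only when the rendered unit differs, which the diff output above shows it did here. The idiom in isolation, as a sketch with hypothetical file and service names:

sudo diff -u /etc/example/app.conf /etc/example/app.conf.new || {
  sudo mv /etc/example/app.conf.new /etc/example/app.conf
  sudo systemctl daemon-reload && sudo systemctl restart example-app
}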
	
	I1213 10:09:31.819432    2828 machine.go:97] duration metric: took 3.1693211s to provisionDockerMachine
	I1213 10:09:31.819451    2828 client.go:176] duration metric: took 11.3259955s to LocalClient.Create
	I1213 10:09:31.819451    2828 start.go:167] duration metric: took 11.3260148s to libmachine.API.Create "no-preload-803600"
	I1213 10:09:31.819489    2828 start.go:293] postStartSetup for "no-preload-803600" (driver="docker")
	I1213 10:09:31.819489    2828 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:09:31.824261    2828 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:09:31.827341    2828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:09:31.887792    2828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52686 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-803600\id_rsa Username:docker}
	I1213 10:09:32.025891    2828 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:09:32.033042    2828 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:09:32.033042    2828 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:09:32.033042    2828 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1213 10:09:32.033042    2828 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1213 10:09:32.034007    2828 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> 29682.pem in /etc/ssl/certs
	I1213 10:09:32.038588    2828 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 10:09:32.054361    2828 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /etc/ssl/certs/29682.pem (1708 bytes)
	I1213 10:09:32.090627    2828 start.go:296] duration metric: took 271.1348ms for postStartSetup
	I1213 10:09:32.096150    2828 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-803600
	I1213 10:09:32.150002    2828 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\config.json ...
	I1213 10:09:32.161355    2828 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:09:32.166183    2828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:09:32.218157    2828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52686 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-803600\id_rsa Username:docker}
	I1213 10:09:32.348901    2828 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:09:32.361379    2828 start.go:128] duration metric: took 11.8719323s to createHost
	I1213 10:09:32.361379    2828 start.go:83] releasing machines lock for "no-preload-803600", held for 11.8719323s
	I1213 10:09:32.366391    2828 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-803600
	I1213 10:09:32.421327    2828 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1213 10:09:32.425642    2828 ssh_runner.go:195] Run: cat /version.json
	I1213 10:09:32.425664    2828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:09:32.429229    2828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:09:32.494604    2828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52686 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-803600\id_rsa Username:docker}
	I1213 10:09:32.495605    2828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52686 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-803600\id_rsa Username:docker}
	W1213 10:09:32.608893    2828 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
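The status 127 above is a host/guest mismatch rather than a network fault: the probe was assembled with the Windows binary name curl.exe but executed over SSH inside the Linux node, where bash finds no such command, so the registry warning below reflects a failed probe rather than a verified connectivity problem. The same probe run by hand with the Linux binary name, as a sketch (profile name from this log):

minikube -p no-preload-803600 ssh -- curl -sS -m 2 https://registry.k8s.io/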
	I1213 10:09:32.612907    2828 ssh_runner.go:195] Run: systemctl --version
	I1213 10:09:32.627917    2828 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:09:32.635901    2828 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:09:32.639894    2828 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W1213 10:09:32.707222    2828 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1213 10:09:32.707222    2828 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1213 10:09:32.716217    2828 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 10:09:32.716217    2828 start.go:496] detecting cgroup driver to use...
	I1213 10:09:32.716217    2828 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:09:32.716217    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:09:32.743720    2828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:09:32.769006    2828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:09:32.788004    2828 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:09:32.793004    2828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:09:32.815006    2828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:09:32.834998    2828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:09:32.855015    2828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:09:32.881014    2828 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:09:32.901994    2828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:09:32.920001    2828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:09:32.940996    2828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 10:09:32.962001    2828 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:09:32.985010    2828 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:09:33.007006    2828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:09:33.169523    2828 ssh_runner.go:195] Run: sudo systemctl restart containerd
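The sed runs above converge /etc/containerd/config.toml on the cgroupfs cgroup driver (SystemdCgroup = false), the pause:3.10.1 sandbox image, the runc.v2 runtime, and unprivileged ports before this restart; note containerd is stopped again a few lines below once the docker runtime is selected. A sketch of how the merged result could be checked on the node while containerd is still up (config dump is a standard containerd subcommand):

sudo containerd config dump | grep -E 'SystemdCgroup|sandbox_image|enable_unprivileged_ports'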
	I1213 10:09:33.364131    2828 start.go:496] detecting cgroup driver to use...
	I1213 10:09:33.364131    2828 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:09:33.369119    2828 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 10:09:33.396119    2828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:09:33.417985    2828 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:09:33.488036    2828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:09:33.511421    2828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:09:33.531460    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:09:33.559625    2828 ssh_runner.go:195] Run: which cri-dockerd
	I1213 10:09:33.572616    2828 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 10:09:33.587622    2828 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1213 10:09:33.611606    2828 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 10:09:33.764928    2828 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 10:09:33.950854    2828 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 10:09:33.951380    2828 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 10:09:33.978604    2828 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1213 10:09:34.003610    2828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:09:34.157675    2828 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 10:09:35.158684    2828 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0009952s)
	I1213 10:09:35.162835    2828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:09:35.186967    2828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 10:09:35.213811    2828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 10:09:35.242197    2828 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 10:09:35.403598    2828 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 10:09:35.560399    2828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:09:35.716014    2828 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 10:09:35.743629    2828 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1213 10:09:35.768775    2828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:09:35.911429    2828 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 10:09:36.031252    2828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 10:09:36.050855    2828 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 10:09:36.057860    2828 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 10:09:36.068866    2828 start.go:564] Will wait 60s for crictl version
	I1213 10:09:36.074866    2828 ssh_runner.go:195] Run: which crictl
	I1213 10:09:36.085852    2828 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:09:36.125863    2828 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
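The crictl probe above reaches dockerd through cri-dockerd: RuntimeName docker with RuntimeApiVersion v1 confirms the CRI shim answered on the socket minikube waited for. The same two readiness checks by hand, as a sketch (socket and binary paths as logged; --runtime-endpoint is a standard crictl flag):

sudo stat /var/run/cri-dockerd.sock
sudo /usr/local/bin/crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version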
	I1213 10:09:36.128851    2828 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 10:09:36.178862    2828 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 10:09:36.225845    2828 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1213 10:09:36.228859    2828 cli_runner.go:164] Run: docker exec -t no-preload-803600 dig +short host.docker.internal
	I1213 10:09:36.381001    2828 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1213 10:09:36.386005    2828 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1213 10:09:36.395010    2828 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:09:36.419698    2828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:09:36.483923    2828 kubeadm.go:884] updating cluster {Name:no-preload-803600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-803600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
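The cluster spec logged above is also persisted on the host as the profile's config.json (saved at 10:09:32.150), so the same fields can be read back without scraping this log. A sketch, assuming minikube is on PATH and supports JSON output for profile listing:

minikube profile list --output json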
	I1213 10:09:36.483923    2828 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 10:09:36.488926    2828 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 10:09:36.525923    2828 docker.go:691] Got preloaded images: 
	I1213 10:09:36.525923    2828 docker.go:697] registry.k8s.io/kube-apiserver:v1.35.0-beta.0 wasn't preloaded
	I1213 10:09:36.525923    2828 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1213 10:09:36.537921    2828 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:09:36.542939    2828 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 10:09:36.547926    2828 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:09:36.547926    2828 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 10:09:36.551921    2828 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 10:09:36.554933    2828 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 10:09:36.556927    2828 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1213 10:09:36.567947    2828 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 10:09:36.567947    2828 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 10:09:36.570926    2828 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 10:09:36.573941    2828 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1213 10:09:36.573941    2828 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1213 10:09:36.579931    2828 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 10:09:36.579931    2828 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 10:09:36.583928    2828 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1213 10:09:36.587930    2828 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	W1213 10:09:36.613920    2828 image.go:191] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1213 10:09:36.670933    2828 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1213 10:09:36.740930    2828 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1213 10:09:36.796931    2828 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1213 10:09:36.850932    2828 image.go:191] authn lookup for registry.k8s.io/etcd:3.6.5-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
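Each authn warning above traces to the host's Docker credential helper: "A specified logon session does not exist" is a Windows credential-store error, and the "(trying anon)" suffix indicates the lookup falls back to anonymous access, which suffices for these public registries. Which helper is involved can be read from the Docker client config, as a sketch (standard client config path; adjust the variable for your shell):

grep credsStore "$USERPROFILE/.docker/config.json"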
	I1213 10:09:36.894938    2828 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	W1213 10:09:36.919935    2828 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1213 10:09:36.930929    2828 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1213 10:09:36.930929    2828 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1213 10:09:36.930929    2828 docker.go:338] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 10:09:36.933935    2828 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 10:09:36.952925    2828 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 10:09:36.965926    2828 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1213 10:09:36.971937    2828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	W1213 10:09:36.977968    2828 image.go:191] authn lookup for registry.k8s.io/pause:3.10.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1213 10:09:36.992929    2828 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1213 10:09:36.992929    2828 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1213 10:09:36.992929    2828 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1213 10:09:36.992929    2828 docker.go:338] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 10:09:36.992929    2828 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1213 10:09:36.996936    2828 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 10:09:37.024939    2828 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	W1213 10:09:37.044923    2828 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.13.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1213 10:09:37.064936    2828 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1213 10:09:37.069935    2828 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1213 10:09:37.074941    2828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1213 10:09:37.118375    2828 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1213 10:09:37.118431    2828 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1213 10:09:37.118481    2828 docker.go:338] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 10:09:37.123880    2828 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 10:09:37.133000    2828 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 10:09:37.135000    2828 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1213 10:09:37.135000    2828 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1213 10:09:37.135000    2828 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1213 10:09:37.135000    2828 docker.go:338] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1213 10:09:37.135000    2828 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1213 10:09:37.138993    2828 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.6.5-0
	I1213 10:09:37.184014    2828 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1213 10:09:37.187003    2828 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1213 10:09:37.187003    2828 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1213 10:09:37.187003    2828 docker.go:338] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 10:09:37.192010    2828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1213 10:09:37.192010    2828 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1213 10:09:37.194008    2828 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 10:09:37.270260    2828 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1213 10:09:37.283596    2828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1213 10:09:37.303465    2828 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1213 10:09:37.331472    2828 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1213 10:09:37.331472    2828 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1213 10:09:37.331472    2828 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1213 10:09:37.331472    2828 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1213 10:09:37.331472    2828 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1213 10:09:37.331472    2828 docker.go:338] Removing image: registry.k8s.io/pause:3.10.1
	I1213 10:09:37.331472    2828 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1213 10:09:37.331472    2828 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1213 10:09:37.336478    2828 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.10.1
	I1213 10:09:37.337476    2828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1213 10:09:37.379201    2828 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1213 10:09:37.379201    2828 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1213 10:09:37.379201    2828 docker.go:338] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 10:09:37.384186    2828 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 10:09:37.466213    2828 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1213 10:09:37.467189    2828 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1213 10:09:37.467189    2828 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1213 10:09:37.474206    2828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1213 10:09:37.498202    2828 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1213 10:09:37.502196    2828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1213 10:09:37.575203    2828 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1213 10:09:37.575203    2828 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1213 10:09:37.600198    2828 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1213 10:09:37.600198    2828 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1213 10:09:37.665781    2828 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:09:37.782765    2828 docker.go:305] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1213 10:09:37.782765    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.10.1 | docker load"
	I1213 10:09:37.845765    2828 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1213 10:09:37.845765    2828 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1213 10:09:37.845765    2828 docker.go:338] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:09:37.850761    2828 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:09:38.042948    2828 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1213 10:09:38.047950    2828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1213 10:09:38.049948    2828 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 from cache
	I1213 10:09:38.188944    2828 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1213 10:09:38.189948    2828 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1213 10:09:38.583560    2828 docker.go:305] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1213 10:09:38.583560    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 | docker load"
	I1213 10:09:43.305894    2828 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 | docker load": (4.7222696s)
	I1213 10:09:43.306886    2828 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 from cache
	I1213 10:09:43.306886    2828 docker.go:305] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1213 10:09:43.306886    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load"
	I1213 10:09:46.735454    2828 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load": (3.4285219s)
	I1213 10:09:46.735454    2828 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 from cache
	I1213 10:09:46.735454    2828 docker.go:305] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1213 10:09:46.735454    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 | docker load"
	I1213 10:09:49.506890    2828 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 | docker load": (2.7713976s)
	I1213 10:09:49.506890    2828 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 from cache
	I1213 10:09:49.506890    2828 docker.go:305] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1213 10:09:49.506890    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 | docker load"
	I1213 10:09:51.482680    2828 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 | docker load": (1.9757634s)
	I1213 10:09:51.482680    2828 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 from cache
	I1213 10:09:51.482680    2828 docker.go:305] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1213 10:09:51.482680    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1213 10:09:52.429372    2828 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 from cache
	I1213 10:09:52.429372    2828 docker.go:305] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1213 10:09:52.429372    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 | docker load"
	I1213 10:09:53.925360    2828 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 | docker load": (1.4959672s)
	I1213 10:09:53.925360    2828 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 from cache
	I1213 10:09:53.925360    2828 docker.go:305] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1213 10:09:53.925360    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.13.1 | docker load"
	I1213 10:09:55.523407    2828 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.13.1 | docker load": (1.5980245s)
	I1213 10:09:55.523407    2828 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 from cache
	I1213 10:09:55.523407    2828 cache_images.go:125] Successfully loaded all cached images
	I1213 10:09:55.523407    2828 cache_images.go:94] duration metric: took 18.9972232s to LoadCachedImages
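The stretch from 10:09:36.894 to here repeats one cycle per image: inspect the runtime for the expected hash, rmi any stale tag on a miss, stat the cache tarball under /var/lib/minikube/images, scp it from the host cache when absent, then pipe it into docker load. The node-side steps for a single image, compressed into a sketch (every command appears verbatim above):

docker image inspect --format '{{.Id}}' registry.k8s.io/pause:3.10.1   # hash miss => image needs transfer
stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1                  # is the tarball already on the node?
sudo cat /var/lib/minikube/images/pause_3.10.1 | docker load           # load it into the docker runtime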
	I1213 10:09:55.523407    2828 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 docker true true} ...
	I1213 10:09:55.523407    2828 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-803600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-803600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
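The kubelet unit above reuses the override pattern from the docker unit earlier: a bare ExecStart= clears the inherited command, then the versioned binary is launched with node-specific flags (hostname override, node IP, bootstrap kubeconfig). Once kubelet is started below (10:09:58.890), the effective command line can be verified on the node, as a sketch:

sudo systemctl cat kubelet
systemctl show kubelet --property=ExecStart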
	I1213 10:09:55.526928    2828 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1213 10:09:55.604564    2828 cni.go:84] Creating CNI manager for ""
	I1213 10:09:55.604564    2828 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 10:09:55.604564    2828 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:09:55.604564    2828 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-803600 NodeName:no-preload-803600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:09:55.604564    2828 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-803600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
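The three YAML documents above (InitConfiguration plus ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are staged on the node as /var/tmp/minikube/kubeadm.yaml.new (the 2228-byte scp at 10:09:58.689 below). A file like this can be syntax-checked before init, as a sketch (kubeadm config validate is a standard subcommand of recent kubeadm; binary path from the log):

sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new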
	
	I1213 10:09:55.608735    2828 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:09:55.621831    2828 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1213 10:09:55.626164    2828 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:09:55.644787    2828 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubectl
	I1213 10:09:55.644787    2828 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubelet
	I1213 10:09:55.644787    2828 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubeadm
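The ?checksum=file:...sha256 suffix on each URL tells the downloader to fetch the matching .sha256 file and verify the binary before it lands in the host cache. The equivalent manual verification, as a sketch (URLs from the log; curl and sha256sum assumed available):

curl -LO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet
curl -LO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256
echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check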
	I1213 10:09:56.697478    2828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1213 10:09:56.707012    2828 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1213 10:09:56.707628    2828 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1213 10:09:56.733356    2828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:09:56.773378    2828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1213 10:09:56.824366    2828 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1213 10:09:56.824366    2828 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1213 10:09:56.850361    2828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1213 10:09:56.898364    2828 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1213 10:09:56.898364    2828 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1213 10:09:58.638215    2828 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:09:58.651227    2828 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1213 10:09:58.670228    2828 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:09:58.689991    2828 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1213 10:09:58.716261    2828 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:09:58.722762    2828 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
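Both /etc/hosts rewrites (10:09:36.395 for host.minikube.internal, here for control-plane.minikube.internal) follow one idiom: filter out any stale line for the name, append the fresh tab-separated mapping to a PID-suffixed temp file ($$ expands to the shell's PID, giving a unique name per invocation), then copy it back over /etc/hosts with sudo. The idiom in isolation, as a sketch with a hypothetical host name:

{ grep -v $'\thost.example.internal$' /etc/hosts; printf '192.0.2.1\thost.example.internal\n'; } > /tmp/h.$$
sudo cp /tmp/h.$$ /etc/hosts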
	I1213 10:09:58.744794    2828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:09:58.890817    2828 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:09:58.913917    2828 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600 for IP: 192.168.103.2
	I1213 10:09:58.913917    2828 certs.go:195] generating shared ca certs ...
	I1213 10:09:58.913917    2828 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:09:58.914494    2828 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1213 10:09:58.914494    2828 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1213 10:09:58.914494    2828 certs.go:257] generating profile certs ...
	I1213 10:09:58.915203    2828 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\client.key
	I1213 10:09:58.915203    2828 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\client.crt with IP's: []
	I1213 10:09:59.038559    2828 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\client.crt ...
	I1213 10:09:59.038559    2828 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\client.crt: {Name:mkedc5e6d98955d90bb4fa60378adfc487746855 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:09:59.039450    2828 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\client.key ...
	I1213 10:09:59.039450    2828 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\client.key: {Name:mkcf167297e2b75a14530a602c5ea31337d3c8fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:09:59.040455    2828 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\apiserver.key.e3e76275
	I1213 10:09:59.040852    2828 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\apiserver.crt.e3e76275 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1213 10:09:59.143030    2828 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\apiserver.crt.e3e76275 ...
	I1213 10:09:59.144030    2828 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\apiserver.crt.e3e76275: {Name:mkf351b8b4e9a89a6ab3ada3f0dc1d353855dfd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:09:59.144143    2828 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\apiserver.key.e3e76275 ...
	I1213 10:09:59.144143    2828 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\apiserver.key.e3e76275: {Name:mk21148185c6a3e1ffe0549df758a2dd3abe3b37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:09:59.145129    2828 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\apiserver.crt.e3e76275 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\apiserver.crt
	I1213 10:09:59.159067    2828 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\apiserver.key.e3e76275 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\apiserver.key
	I1213 10:09:59.160249    2828 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\proxy-client.key
	I1213 10:09:59.160249    2828 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\proxy-client.crt with IP's: []
	I1213 10:09:59.236061    2828 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\proxy-client.crt ...
	I1213 10:09:59.236061    2828 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\proxy-client.crt: {Name:mkd76bd7da86cd4dc5c9514b055c47f8c450d73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:09:59.236609    2828 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\proxy-client.key ...
	I1213 10:09:59.236609    2828 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\proxy-client.key: {Name:mk466108dd8f973e425221dbd61c9ee902525ba3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:09:59.250528    2828 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem (1338 bytes)
	W1213 10:09:59.251182    2828 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968_empty.pem, impossibly tiny 0 bytes
	I1213 10:09:59.251182    2828 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1213 10:09:59.251182    2828 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1213 10:09:59.251182    2828 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1213 10:09:59.251817    2828 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1213 10:09:59.252007    2828 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem (1708 bytes)
	I1213 10:09:59.252278    2828 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:09:59.285021    2828 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:09:59.315244    2828 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:09:59.341827    2828 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 10:09:59.372467    2828 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:09:59.397961    2828 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 10:09:59.431345    2828 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:09:59.462947    2828 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:09:59.495495    2828 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /usr/share/ca-certificates/29682.pem (1708 bytes)
	I1213 10:09:59.527613    2828 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:09:59.554764    2828 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem --> /usr/share/ca-certificates/2968.pem (1338 bytes)
	I1213 10:09:59.582185    2828 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:09:59.609080    2828 ssh_runner.go:195] Run: openssl version
	I1213 10:09:59.622090    2828 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29682.pem
	I1213 10:09:59.639078    2828 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29682.pem /etc/ssl/certs/29682.pem
	I1213 10:09:59.659087    2828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29682.pem
	I1213 10:09:59.666092    2828 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:48 /usr/share/ca-certificates/29682.pem
	I1213 10:09:59.671085    2828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29682.pem
	I1213 10:09:59.730187    2828 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:09:59.747166    2828 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/29682.pem /etc/ssl/certs/3ec20f2e.0
	I1213 10:09:59.766177    2828 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:09:59.782164    2828 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:09:59.798178    2828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:09:59.806177    2828 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:09:59.810173    2828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:09:59.868956    2828 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:09:59.888252    2828 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 10:09:59.903824    2828 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2968.pem
	I1213 10:09:59.919816    2828 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2968.pem /etc/ssl/certs/2968.pem
	I1213 10:09:59.935819    2828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2968.pem
	I1213 10:09:59.942816    2828 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:48 /usr/share/ca-certificates/2968.pem
	I1213 10:09:59.946816    2828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2968.pem
	I1213 10:09:59.992826    2828 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:10:00.011498    2828 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2968.pem /etc/ssl/certs/51391683.0
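
The certificate steps above follow OpenSSL's hashed-directory convention: each CA is copied under /usr/share/ca-certificates, its subject hash is computed, and a symlink named <hash>.0 is created in /etc/ssl/certs so TLS clients can locate it. A minimal sketch for one CA, using the minikubeCA.pem path from this run:

    # Compute the OpenSSL subject hash and publish the CA under /etc/ssl/certs.
    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")    # b5213941 in this run
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
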
	I1213 10:10:00.032527    2828 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:10:00.041753    2828 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 10:10:00.041753    2828 kubeadm.go:401] StartCluster: {Name:no-preload-803600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-803600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:10:00.048011    2828 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 10:10:00.084149    2828 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:10:00.103157    2828 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:10:00.117150    2828 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:10:00.121157    2828 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:10:00.133152    2828 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:10:00.133152    2828 kubeadm.go:158] found existing configuration files:
	
	I1213 10:10:00.137149    2828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 10:10:00.149158    2828 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:10:00.154155    2828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:10:00.170151    2828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 10:10:00.182153    2828 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:10:00.186150    2828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:10:00.202161    2828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 10:10:00.217160    2828 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:10:00.221155    2828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:10:00.237153    2828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 10:10:00.252371    2828 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:10:00.256316    2828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
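
The grep-then-rm sequence above is minikube's stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted so that kubeadm init regenerates it. A compact sketch of the same logic:

    # Remove kubeconfigs that don't point at the expected control-plane endpoint.
    endpoint='https://control-plane.minikube.internal:8443'
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
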
	I1213 10:10:00.272318    2828 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:10:00.392384    2828 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1213 10:10:00.490739    2828 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:10:00.626313    2828 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:14:02.480861    2828 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:14:02.480861    2828 kubeadm.go:319] 
	I1213 10:14:02.480861    2828 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 10:14:02.485784    2828 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:14:02.485784    2828 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:14:02.486500    2828 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:14:02.486624    2828 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1213 10:14:02.486845    2828 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1213 10:14:02.486845    2828 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1213 10:14:02.486845    2828 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1213 10:14:02.486845    2828 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1213 10:14:02.486845    2828 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1213 10:14:02.487369    2828 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1213 10:14:02.487508    2828 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1213 10:14:02.487540    2828 kubeadm.go:319] CONFIG_INET: enabled
	I1213 10:14:02.487540    2828 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1213 10:14:02.487540    2828 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1213 10:14:02.487540    2828 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1213 10:14:02.487540    2828 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1213 10:14:02.488177    2828 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1213 10:14:02.488177    2828 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1213 10:14:02.488177    2828 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1213 10:14:02.488177    2828 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1213 10:14:02.488177    2828 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1213 10:14:02.488832    2828 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1213 10:14:02.488832    2828 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1213 10:14:02.488832    2828 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1213 10:14:02.488832    2828 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1213 10:14:02.488832    2828 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1213 10:14:02.488832    2828 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1213 10:14:02.489567    2828 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1213 10:14:02.489567    2828 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1213 10:14:02.489567    2828 kubeadm.go:319] OS: Linux
	I1213 10:14:02.489567    2828 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:14:02.489567    2828 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:14:02.489567    2828 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:14:02.490095    2828 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:14:02.490164    2828 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:14:02.490276    2828 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:14:02.490276    2828 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:14:02.490276    2828 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:14:02.490276    2828 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:14:02.490805    2828 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:14:02.491121    2828 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:14:02.491121    2828 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:14:02.491121    2828 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:14:02.493887    2828 out.go:252]   - Generating certificates and keys ...
	I1213 10:14:02.493887    2828 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:14:02.494541    2828 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:14:02.494541    2828 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 10:14:02.494541    2828 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 10:14:02.494541    2828 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 10:14:02.495181    2828 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 10:14:02.495181    2828 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 10:14:02.495181    2828 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-803600] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1213 10:14:02.495181    2828 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 10:14:02.495788    2828 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-803600] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1213 10:14:02.495895    2828 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 10:14:02.495919    2828 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 10:14:02.495919    2828 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 10:14:02.495919    2828 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:14:02.495919    2828 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:14:02.495919    2828 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:14:02.496433    2828 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:14:02.496543    2828 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:14:02.496570    2828 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:14:02.496570    2828 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:14:02.496570    2828 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:14:02.499019    2828 out.go:252]   - Booting up control plane ...
	I1213 10:14:02.499019    2828 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:14:02.499019    2828 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:14:02.499019    2828 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:14:02.499019    2828 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:14:02.499812    2828 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:14:02.499999    2828 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:14:02.500240    2828 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:14:02.500240    2828 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:14:02.500240    2828 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:14:02.500827    2828 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:14:02.500827    2828 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001316258s
	I1213 10:14:02.500827    2828 kubeadm.go:319] 
	I1213 10:14:02.500827    2828 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:14:02.500827    2828 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:14:02.500827    2828 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:14:02.500827    2828 kubeadm.go:319] 
	I1213 10:14:02.501521    2828 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:14:02.501618    2828 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:14:02.501705    2828 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:14:02.501768    2828 kubeadm.go:319] 
	W1213 10:14:02.501950    2828 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-803600] and IPs [192.168.103.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-803600] and IPs [192.168.103.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001316258s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
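
The wait-control-plane phase fails here because kubeadm polls the kubelet's local healthz endpoint for up to 4m0s and never gets a healthy reply. To triage this interactively, one could re-run the probe and kubeadm's suggested commands inside the node, for example via minikube ssh against this run's profile:

    # Re-run the probe kubeadm timed out on, then inspect the kubelet service.
    minikube -p no-preload-803600 ssh -- curl -sSL http://127.0.0.1:10248/healthz
    minikube -p no-preload-803600 ssh -- sudo systemctl status kubelet
    minikube -p no-preload-803600 ssh -- sudo journalctl -xeu kubelet --no-pager
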
	
	I1213 10:14:02.506360    2828 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1213 10:14:02.990475    2828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:14:03.013705    2828 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:14:03.018560    2828 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:14:03.032555    2828 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:14:03.033223    2828 kubeadm.go:158] found existing configuration files:
	
	I1213 10:14:03.037228    2828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 10:14:03.050876    2828 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:14:03.055292    2828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:14:03.071598    2828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 10:14:03.084339    2828 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:14:03.089402    2828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:14:03.108064    2828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 10:14:03.122390    2828 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:14:03.127803    2828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:14:03.149423    2828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 10:14:03.162860    2828 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:14:03.167497    2828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:14:03.186678    2828 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:14:03.307847    2828 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1213 10:14:03.392361    2828 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:14:03.495245    2828 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:18:04.183403    2828 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:18:04.183403    2828 kubeadm.go:319] 
	I1213 10:18:04.184173    2828 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 10:18:04.186667    2828 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:18:04.186667    2828 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:18:04.186667    2828 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:18:04.187620    2828 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1213 10:18:04.187620    2828 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1213 10:18:04.187620    2828 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1213 10:18:04.187620    2828 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1213 10:18:04.187620    2828 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1213 10:18:04.188149    2828 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1213 10:18:04.188862    2828 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1213 10:18:04.188980    2828 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1213 10:18:04.188980    2828 kubeadm.go:319] CONFIG_INET: enabled
	I1213 10:18:04.188980    2828 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1213 10:18:04.188980    2828 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1213 10:18:04.188980    2828 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1213 10:18:04.189584    2828 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1213 10:18:04.189688    2828 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1213 10:18:04.189688    2828 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1213 10:18:04.189688    2828 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1213 10:18:04.189688    2828 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1213 10:18:04.189688    2828 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1213 10:18:04.189688    2828 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1213 10:18:04.190208    2828 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1213 10:18:04.190389    2828 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1213 10:18:04.190389    2828 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1213 10:18:04.190389    2828 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1213 10:18:04.190389    2828 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1213 10:18:04.190389    2828 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1213 10:18:04.190984    2828 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1213 10:18:04.191111    2828 kubeadm.go:319] OS: Linux
	I1213 10:18:04.191174    2828 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:18:04.191286    2828 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:18:04.191402    2828 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:18:04.191585    2828 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:18:04.191585    2828 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:18:04.191585    2828 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:18:04.191585    2828 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:18:04.191585    2828 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:18:04.191585    2828 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:18:04.192202    2828 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:18:04.192303    2828 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:18:04.192464    2828 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:18:04.192542    2828 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:18:04.194798    2828 out.go:252]   - Generating certificates and keys ...
	I1213 10:18:04.194947    2828 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:18:04.194947    2828 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:18:04.194947    2828 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:18:04.194947    2828 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:18:04.196221    2828 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:18:04.196221    2828 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:18:04.196221    2828 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:18:04.196221    2828 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:18:04.196221    2828 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:18:04.196778    2828 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:18:04.196778    2828 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:18:04.196778    2828 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:18:04.196778    2828 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:18:04.196778    2828 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:18:04.197300    2828 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:18:04.197357    2828 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:18:04.197430    2828 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:18:04.197430    2828 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:18:04.197430    2828 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:18:04.201144    2828 out.go:252]   - Booting up control plane ...
	I1213 10:18:04.201307    2828 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:18:04.201307    2828 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:18:04.201307    2828 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:18:04.201307    2828 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:18:04.201899    2828 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:18:04.201899    2828 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:18:04.201899    2828 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:18:04.201899    2828 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:18:04.201899    2828 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:18:04.202862    2828 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:18:04.202862    2828 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001338168s
	I1213 10:18:04.202862    2828 kubeadm.go:319] 
	I1213 10:18:04.202862    2828 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:18:04.202862    2828 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:18:04.202862    2828 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:18:04.202862    2828 kubeadm.go:319] 
	I1213 10:18:04.203562    2828 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:18:04.203562    2828 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:18:04.203562    2828 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:18:04.203562    2828 kubeadm.go:319] 
	I1213 10:18:04.203562    2828 kubeadm.go:403] duration metric: took 8m4.1550359s to StartCluster
	I1213 10:18:04.203562    2828 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:18:04.207228    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:18:04.273383    2828 cri.go:89] found id: ""
	I1213 10:18:04.273383    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.273383    2828 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:18:04.273383    2828 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:18:04.277565    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:18:04.322297    2828 cri.go:89] found id: ""
	I1213 10:18:04.322297    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.322367    2828 logs.go:284] No container was found matching "etcd"
	I1213 10:18:04.322367    2828 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:18:04.326520    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:18:04.369083    2828 cri.go:89] found id: ""
	I1213 10:18:04.369140    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.369163    2828 logs.go:284] No container was found matching "coredns"
	I1213 10:18:04.369163    2828 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:18:04.373406    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:18:04.421351    2828 cri.go:89] found id: ""
	I1213 10:18:04.421351    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.421351    2828 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:18:04.421351    2828 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:18:04.425824    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:18:04.478322    2828 cri.go:89] found id: ""
	I1213 10:18:04.478322    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.478322    2828 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:18:04.478322    2828 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:18:04.484844    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:18:04.526345    2828 cri.go:89] found id: ""
	I1213 10:18:04.526345    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.526345    2828 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:18:04.526345    2828 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:18:04.530940    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:18:04.579137    2828 cri.go:89] found id: ""
	I1213 10:18:04.579137    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.579137    2828 logs.go:284] No container was found matching "kindnet"
	I1213 10:18:04.579137    2828 logs.go:123] Gathering logs for kubelet ...
	I1213 10:18:04.579137    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:18:04.640211    2828 logs.go:123] Gathering logs for dmesg ...
	I1213 10:18:04.640211    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:18:04.678021    2828 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:18:04.678021    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:18:04.767758    2828 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:18:04.755802   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.756711   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.759289   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.760591   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.761734   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:18:04.767814    2828 logs.go:123] Gathering logs for Docker ...
	I1213 10:18:04.767846    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:18:04.804946    2828 logs.go:123] Gathering logs for container status ...
	I1213 10:18:04.804946    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:18:04.860957    2828 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001338168s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 10:18:04.860957    2828 out.go:285] * 
	W1213 10:18:04.861546    2828 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001338168s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:18:04.861737    2828 out.go:285] * 
	W1213 10:18:04.863650    2828 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:18:04.869031    2828 out.go:203] 
	W1213 10:18:04.871300    2828 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001338168s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:18:04.871300    2828 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 10:18:04.871300    2828 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 10:18:04.874442    2828 out.go:203] 

                                                
                                                
** /stderr **
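The kubeadm run above dies at the kubelet health check: http://127.0.0.1:10248/healthz never answered within 4m0s. A minimal sketch of the follow-up the kubeadm output itself names, run from the host against this profile (the profile name no-preload-803600 comes from the test arguments below; routing the commands through minikube ssh is an assumption, mirroring the ssh invocations in the Audit table further down):

    # Kubelet unit state and journal inside the node (the two commands kubeadm suggests)
    minikube -p no-preload-803600 ssh -- sudo systemctl status kubelet --no-pager
    minikube -p no-preload-803600 ssh -- sudo journalctl -xeu kubelet --no-pager
    # Probe the endpoint kubeadm polls during [kubelet-check]
    minikube -p no-preload-803600 ssh -- curl -sS http://127.0.0.1:10248/healthz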
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p no-preload-803600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0": exit status 109
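Both the Suggestion line and the cgroups-v1 preflight warning in the captured stderr point at kubelet cgroup configuration on this cgroup v1 WSL2 host. A hedged sketch of the retry the log proposes; the extra-config flag is taken verbatim from the suggestion, and whether it clears the failure on this host is not something this report shows:

    out/minikube-windows-amd64.exe start -p no-preload-803600 --memory=3072 --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd

The same warning names the other route: set the kubelet configuration option FailCgroupV1 to false and explicitly skip the validation, which the warning text describes as the way to keep kubelet v1.35+ on a cgroup v1 node.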
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-803600
helpers_test.go:244: (dbg) docker inspect no-preload-803600:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd",
	        "Created": "2025-12-13T10:09:24.921242732Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 327940,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:09:25.240761048Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd/hostname",
	        "HostsPath": "/var/lib/docker/containers/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd/hosts",
	        "LogPath": "/var/lib/docker/containers/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd-json.log",
	        "Name": "/no-preload-803600",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-803600:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-803600",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/571041f9092b0534048a0b1dac35e9d4a08a2ff2442796fa15a0636437fe7f5e-init/diff:/var/lib/docker/overlay2/429aa299c6fcdb1695d08ec7c893c57c033afffcd3ec41fc904bf3236db5abde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/571041f9092b0534048a0b1dac35e9d4a08a2ff2442796fa15a0636437fe7f5e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/571041f9092b0534048a0b1dac35e9d4a08a2ff2442796fa15a0636437fe7f5e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/571041f9092b0534048a0b1dac35e9d4a08a2ff2442796fa15a0636437fe7f5e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-803600",
	                "Source": "/var/lib/docker/volumes/no-preload-803600/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-803600",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-803600",
	                "name.minikube.sigs.k8s.io": "no-preload-803600",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a1ed7a1408fdd16408942ad2920ffd10571f40dc038c29f6667e5ed69ec2ea92",
	            "SandboxKey": "/var/run/docker/netns/a1ed7a1408fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52686"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52682"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52683"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52684"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52685"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-803600": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ad4e73e428abf58593ff96b4628f21032a7a4afd7c1c0bb8be8d55b4e2d320fc",
	                    "EndpointID": "f89c7b01b868d720f5fc06986024a266fce8726dc2b3c53a5ec6b002f8b5ec56",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-803600",
	                        "3960d9897f63"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
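The inspect dump shows the apiserver's 8443/tcp published on 127.0.0.1:52685 under NetworkSettings.Ports. As a sketch, the same value can be read without scanning the full JSON via a standard docker inspect Go template (container name from this test; quoting shown POSIX-style, adjust for cmd/PowerShell):

    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-803600
    # prints 52685 for the state captured above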
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-803600 -n no-preload-803600
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-803600 -n no-preload-803600: exit status 6 (584.0355ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 10:18:05.902315    5740 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-803600" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
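The status output names its own remedy for the stale kubeconfig entry ("no-preload-803600" missing from C:\Users\jenkins.minikube4\minikube-integration\kubeconfig). A sketch of applying it; update-context rewrites the recorded endpoint, and whether status recovers here, given the apiserver never came up, is not shown by this log:

    out/minikube-windows-amd64.exe -p no-preload-803600 update-context
    out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-803600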
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-803600 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-803600 logs -n 25: (1.0870312s)
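The advice box in the captured output asks for a complete log file when filing an issue; the -n 25 run above keeps only the tail. A sketch of capturing the full log for this profile, using the command the box gives:

    out/minikube-windows-amd64.exe -p no-preload-803600 logs --file=logs.txt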
helpers_test.go:261: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                      │    PROFILE     │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-416400 sudo systemctl status kubelet --all --full --no-pager                                           │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl cat kubelet --no-pager                                                           │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo journalctl -xeu kubelet --all --full --no-pager                                            │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cat /etc/kubernetes/kubelet.conf                                                           │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cat /var/lib/kubelet/config.yaml                                                           │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl status docker --all --full --no-pager                                            │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl cat docker --no-pager                                                            │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cat /etc/docker/daemon.json                                                                │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo docker system info                                                                         │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl status cri-docker --all --full --no-pager                                        │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl cat cri-docker --no-pager                                                        │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                   │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cat /usr/lib/systemd/system/cri-docker.service                                             │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cri-dockerd --version                                                                      │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl status containerd --all --full --no-pager                                        │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl cat containerd --no-pager                                                        │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cat /lib/systemd/system/containerd.service                                                 │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cat /etc/containerd/config.toml                                                            │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo containerd config dump                                                                     │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl status crio --all --full --no-pager                                              │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │                     │
	│ ssh     │ -p auto-416400 sudo systemctl cat crio --no-pager                                                              │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                    │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo crio config                                                                                │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ delete  │ -p auto-416400                                                                                                 │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ start   │ -p kindnet-416400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker │ kindnet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:18 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:18:00
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:17:59.982121    6436 out.go:360] Setting OutFile to fd 1200 ...
	I1213 10:18:00.024750    6436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:18:00.024821    6436 out.go:374] Setting ErrFile to fd 1736...
	I1213 10:18:00.024821    6436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:18:00.039127    6436 out.go:368] Setting JSON to false
	I1213 10:18:00.042132    6436 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6887,"bootTime":1765614192,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 10:18:00.042132    6436 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 10:18:00.048133    6436 out.go:179] * [kindnet-416400] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 10:18:00.052119    6436 notify.go:221] Checking for updates...
	I1213 10:18:00.054248    6436 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:18:00.056421    6436 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:18:00.060745    6436 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 10:18:00.063186    6436 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:18:00.066370    6436 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:18:00.069771    6436 config.go:182] Loaded profile config "kubernetes-upgrade-481200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:18:00.069864    6436 config.go:182] Loaded profile config "newest-cni-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:18:00.069864    6436 config.go:182] Loaded profile config "no-preload-803600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:18:00.070450    6436 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:18:00.192644    6436 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 10:18:00.198649    6436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:18:00.421515    6436 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:18:00.403252148 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:18:00.425087    6436 out.go:179] * Using the docker driver based on user configuration
	I1213 10:18:00.426922    6436 start.go:309] selected driver: docker
	I1213 10:18:00.427003    6436 start.go:927] validating driver "docker" against <nil>
	I1213 10:18:00.427099    6436 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:18:00.513356    6436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:18:00.742258    6436 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:18:00.726260812 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:18:00.743264    6436 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:18:00.743264    6436 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:18:00.746255    6436 out.go:179] * Using Docker Desktop driver with root privileges
	I1213 10:18:00.748273    6436 cni.go:84] Creating CNI manager for "kindnet"
	I1213 10:18:00.748273    6436 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 10:18:00.748273    6436 start.go:353] cluster config:
	{Name:kindnet-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
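	The cluster config above was generated entirely from the test's flags (note CNI:kindnet and NetworkPlugin:cni). As a rough sketch, an equivalent cluster can be requested by hand with minikube's documented flags; the profile name kindnet-demo below is illustrative and not from this run:

	    # Start a one-node docker-driver cluster with the kindnet CNI,
	    # mirroring Memory:3072 and CPUs:2 from the config above.
	    minikube start -p kindnet-demo --driver=docker --cni=kindnet --memory=3072 --cpus=2

	    # List saved profiles; minikube persists each one as profiles/<name>/config.json.
	    minikube profile list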
	I1213 10:18:00.750286    6436 out.go:179] * Starting "kindnet-416400" primary control-plane node in "kindnet-416400" cluster
	I1213 10:18:00.754272    6436 cache.go:134] Beginning downloading kic base image for docker with docker
	I1213 10:18:00.757272    6436 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:18:00.759259    6436 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:18:00.759259    6436 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:18:00.760267    6436 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1213 10:18:00.760267    6436 cache.go:65] Caching tarball of preloaded images
	I1213 10:18:00.760267    6436 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 10:18:00.760267    6436 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1213 10:18:00.760267    6436 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\config.json ...
	I1213 10:18:00.760267    6436 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\config.json: {Name:mkb57822615d533cf4e4f00f9118393a9934e233 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:18:00.831962    6436 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:18:00.832560    6436 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:18:00.832612    6436 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:18:00.832612    6436 start.go:360] acquireMachinesLock for kindnet-416400: {Name:mk1cbf47b4d1a255d1032f17aad230077b5c0db7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:18:00.832612    6436 start.go:364] duration metric: took 0s to acquireMachinesLock for "kindnet-416400"
	I1213 10:18:00.832612    6436 start.go:93] Provisioning new machine with config: &{Name:kindnet-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 10:18:00.832612    6436 start.go:125] createHost starting for "" (driver="docker")
	I1213 10:18:04.183403    2828 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:18:04.183403    2828 kubeadm.go:319] 
	I1213 10:18:04.184173    2828 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 10:18:04.186667    2828 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:18:04.186667    2828 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:18:04.186667    2828 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:18:04.187620    2828 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1213 10:18:04.187620    2828 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1213 10:18:04.187620    2828 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1213 10:18:04.187620    2828 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1213 10:18:04.187620    2828 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1213 10:18:04.188149    2828 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1213 10:18:04.188862    2828 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1213 10:18:04.188980    2828 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1213 10:18:04.188980    2828 kubeadm.go:319] CONFIG_INET: enabled
	I1213 10:18:04.188980    2828 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1213 10:18:04.188980    2828 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1213 10:18:04.188980    2828 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1213 10:18:04.189584    2828 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1213 10:18:04.189688    2828 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1213 10:18:04.189688    2828 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1213 10:18:04.189688    2828 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1213 10:18:04.189688    2828 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1213 10:18:04.189688    2828 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1213 10:18:04.189688    2828 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1213 10:18:04.190208    2828 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1213 10:18:04.190389    2828 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1213 10:18:04.190389    2828 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1213 10:18:04.190389    2828 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1213 10:18:04.190389    2828 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1213 10:18:04.190389    2828 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1213 10:18:04.190984    2828 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1213 10:18:04.191111    2828 kubeadm.go:319] OS: Linux
	I1213 10:18:04.191174    2828 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:18:04.191286    2828 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:18:04.191402    2828 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:18:04.191585    2828 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:18:04.191585    2828 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:18:04.191585    2828 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:18:04.191585    2828 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:18:04.191585    2828 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:18:04.191585    2828 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:18:04.192202    2828 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:18:04.192303    2828 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:18:04.192464    2828 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:18:04.192542    2828 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:18:04.194798    2828 out.go:252]   - Generating certificates and keys ...
	I1213 10:18:04.194947    2828 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:18:04.194947    2828 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:18:04.194947    2828 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:18:04.194947    2828 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:18:04.196221    2828 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:18:04.196221    2828 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:18:04.196221    2828 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:18:04.196221    2828 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:18:04.196221    2828 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:18:04.196778    2828 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:18:04.196778    2828 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:18:04.196778    2828 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:18:04.196778    2828 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:18:04.196778    2828 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:18:04.197300    2828 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:18:04.197357    2828 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:18:04.197430    2828 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:18:04.197430    2828 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:18:04.197430    2828 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:18:04.201144    2828 out.go:252]   - Booting up control plane ...
	I1213 10:18:04.201307    2828 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:18:04.201307    2828 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:18:04.201307    2828 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:18:04.201307    2828 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:18:04.201899    2828 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:18:04.201899    2828 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:18:04.201899    2828 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:18:04.201899    2828 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:18:04.201899    2828 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:18:04.202862    2828 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:18:04.202862    2828 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001338168s
	I1213 10:18:04.202862    2828 kubeadm.go:319] 
	I1213 10:18:04.202862    2828 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:18:04.202862    2828 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:18:04.202862    2828 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:18:04.202862    2828 kubeadm.go:319] 
	I1213 10:18:04.203562    2828 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:18:04.203562    2828 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:18:04.203562    2828 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:18:04.203562    2828 kubeadm.go:319] 
	I1213 10:18:04.203562    2828 kubeadm.go:403] duration metric: took 8m4.1550359s to StartCluster
	I1213 10:18:04.203562    2828 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:18:04.207228    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:18:04.273383    2828 cri.go:89] found id: ""
	I1213 10:18:04.273383    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.273383    2828 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:18:04.273383    2828 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:18:04.277565    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:18:04.322297    2828 cri.go:89] found id: ""
	I1213 10:18:04.322297    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.322367    2828 logs.go:284] No container was found matching "etcd"
	I1213 10:18:04.322367    2828 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:18:04.326520    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:18:04.369083    2828 cri.go:89] found id: ""
	I1213 10:18:04.369140    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.369163    2828 logs.go:284] No container was found matching "coredns"
	I1213 10:18:04.369163    2828 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:18:04.373406    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:18:04.421351    2828 cri.go:89] found id: ""
	I1213 10:18:04.421351    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.421351    2828 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:18:04.421351    2828 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:18:04.425824    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:18:04.478322    2828 cri.go:89] found id: ""
	I1213 10:18:04.478322    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.478322    2828 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:18:04.478322    2828 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:18:04.484844    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:18:04.526345    2828 cri.go:89] found id: ""
	I1213 10:18:04.526345    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.526345    2828 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:18:04.526345    2828 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:18:04.530940    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:18:04.579137    2828 cri.go:89] found id: ""
	I1213 10:18:04.579137    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.579137    2828 logs.go:284] No container was found matching "kindnet"
	I1213 10:18:04.579137    2828 logs.go:123] Gathering logs for kubelet ...
	I1213 10:18:04.579137    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:18:04.640211    2828 logs.go:123] Gathering logs for dmesg ...
	I1213 10:18:04.640211    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:18:04.678021    2828 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:18:04.678021    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:18:04.767758    2828 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:18:04.755802   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.756711   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.759289   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.760591   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.761734   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:18:04.755802   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.756711   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.759289   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.760591   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.761734   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:18:04.767814    2828 logs.go:123] Gathering logs for Docker ...
	I1213 10:18:04.767846    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:18:04.804946    2828 logs.go:123] Gathering logs for container status ...
	I1213 10:18:04.804946    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
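	The evidence-gathering commands above (the kubelet journal, dmesg, kubectl describe nodes, the Docker/cri-docker journals, and crictl container status) can be replayed by hand against a node that failed to come up. A minimal sketch, assuming the failing profile is no-preload-803600 (the node whose Docker journal appears at the end of this log; substitute the actual profile name):

	    # Container status on the node, exactly as minikube collects it.
	    minikube ssh -p no-preload-803600 -- sudo crictl ps -a

	    # Kubelet and container-runtime journals.
	    minikube ssh -p no-preload-803600 -- sudo journalctl -u kubelet -n 400 --no-pager
	    minikube ssh -p no-preload-803600 -- sudo journalctl -u docker -u cri-docker -n 400 --no-pager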
	W1213 10:18:04.860957    2828 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001338168s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
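	kubeadm's wait-control-plane phase is just polling the kubelet's local health endpoint, so the failing check can be replayed by hand inside the node before digging into journals. A minimal sketch, again with an illustrative profile name:

	    # Replay the exact health probe kubeadm gave up on (prints "ok" when healthy).
	    minikube ssh -p no-preload-803600 -- curl -sS http://127.0.0.1:10248/healthz

	    # The two follow-ups kubeadm itself suggests above.
	    minikube ssh -p no-preload-803600 -- sudo systemctl status kubelet --no-pager
	    minikube ssh -p no-preload-803600 -- sudo journalctl -xeu kubelet --no-pager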
	W1213 10:18:04.860957    2828 out.go:285] * 
	W1213 10:18:04.861737    2828 out.go:285] * 
	W1213 10:18:04.863650    2828 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:18:04.869031    2828 out.go:203] 
	W1213 10:18:04.871300    2828 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001338168s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:18:04.871300    2828 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 10:18:04.871300    2828 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
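	The suggestion above maps to a concrete retry. A sketch, assuming the same profile is restarted (only --extra-config comes from the hint; the other flags are illustrative). Note that on a cgroup v1 host such as this WSL2 kernel, the preflight warning earlier also says kubelet v1.35+ additionally requires the KubeletConfiguration option FailCgroupV1 to be set to false:

	    # Retry with the kubelet cgroup driver pinned to systemd, per the hint above.
	    minikube start -p no-preload-803600 --driver=docker \
	      --extra-config=kubelet.cgroup-driver=systemd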
	I1213 10:18:04.874442    2828 out.go:203] 
	I1213 10:18:00.836584    6436 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 10:18:00.837174    6436 start.go:159] libmachine.API.Create for "kindnet-416400" (driver="docker")
	I1213 10:18:00.837174    6436 client.go:173] LocalClient.Create starting
	I1213 10:18:00.837789    6436 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1213 10:18:00.837789    6436 main.go:143] libmachine: Decoding PEM data...
	I1213 10:18:00.837789    6436 main.go:143] libmachine: Parsing certificate...
	I1213 10:18:00.837789    6436 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1213 10:18:00.837789    6436 main.go:143] libmachine: Decoding PEM data...
	I1213 10:18:00.838309    6436 main.go:143] libmachine: Parsing certificate...
	I1213 10:18:00.842081    6436 cli_runner.go:164] Run: docker network inspect kindnet-416400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 10:18:00.894842    6436 cli_runner.go:211] docker network inspect kindnet-416400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 10:18:00.899134    6436 network_create.go:284] running [docker network inspect kindnet-416400] to gather additional debugging logs...
	I1213 10:18:00.899219    6436 cli_runner.go:164] Run: docker network inspect kindnet-416400
	W1213 10:18:00.960344    6436 cli_runner.go:211] docker network inspect kindnet-416400 returned with exit code 1
	I1213 10:18:00.961297    6436 network_create.go:287] error running [docker network inspect kindnet-416400]: docker network inspect kindnet-416400: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-416400 not found
	I1213 10:18:00.961297    6436 network_create.go:289] output of [docker network inspect kindnet-416400]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-416400 not found
	
	** /stderr **
	I1213 10:18:00.964860    6436 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:18:01.049602    6436 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:18:01.079976    6436 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:18:01.095882    6436 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:18:01.111785    6436 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:18:01.142668    6436 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:18:01.173768    6436 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:18:01.189620    6436 network.go:209] skipping subnet 192.168.103.0/24 that is reserved: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:18:01.204043    6436 network.go:206] using free private subnet 192.168.112.0/24: &{IP:192.168.112.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.112.0/24 Gateway:192.168.112.1 ClientMin:192.168.112.2 ClientMax:192.168.112.254 Broadcast:192.168.112.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018c8600}
	I1213 10:18:01.204043    6436 network_create.go:124] attempt to create docker network kindnet-416400 192.168.112.0/24 with gateway 192.168.112.1 and MTU of 1500 ...
	I1213 10:18:01.210831    6436 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.112.0/24 --gateway=192.168.112.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-416400 kindnet-416400
	I1213 10:18:01.366696    6436 network_create.go:108] docker network kindnet-416400 192.168.112.0/24 created
	I1213 10:18:01.366696    6436 kic.go:121] calculated static IP "192.168.112.2" for the "kindnet-416400" container
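	Once the network exists, the same inspect that returned exit code 1 above succeeds and reports the chosen subnet. A quick check, using the same Go template style as the log:

	    # Confirm the subnet and gateway picked for the new network.
	    docker network inspect kindnet-416400 \
	      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'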
	I1213 10:18:01.376226    6436 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 10:18:01.436190    6436 cli_runner.go:164] Run: docker volume create kindnet-416400 --label name.minikube.sigs.k8s.io=kindnet-416400 --label created_by.minikube.sigs.k8s.io=true
	I1213 10:18:01.489173    6436 oci.go:103] Successfully created a docker volume kindnet-416400
	I1213 10:18:01.492179    6436 cli_runner.go:164] Run: docker run --rm --name kindnet-416400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-416400 --entrypoint /usr/bin/test -v kindnet-416400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 10:18:02.838841    6436 cli_runner.go:217] Completed: docker run --rm --name kindnet-416400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-416400 --entrypoint /usr/bin/test -v kindnet-416400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.3466433s)
	I1213 10:18:02.838841    6436 oci.go:107] Successfully prepared a docker volume kindnet-416400
	I1213 10:18:02.838841    6436 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:18:02.838841    6436 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 10:18:02.843829    6436 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-416400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
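
The two docker run calls above are minikube's preload pattern: a throwaway sidecar container first probes the freshly created named volume (--entrypoint /usr/bin/test ... -d /var/lib), then a second container bind-mounts the preload tarball read-only and untars it into the volume with lz4. The pattern generalizes to seeding any named volume from a host tarball (a sketch; the volume name, tarball path, and image tag here are placeholders):

	docker volume create demo-vol
	docker run --rm --entrypoint /usr/bin/tar \
	  -v C:\path\to\preloaded-images.tar.lz4:/preloaded.tar:ro \
	  -v demo-vol:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:<tag> \
	  -I lz4 -xf /preloaded.tar -C /extractDir
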
	
	
	==> Docker <==
	Dec 13 10:09:34 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:34.979577017Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 13 10:09:34 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:34.979670526Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 13 10:09:34 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:34.979683227Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 13 10:09:34 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:34.979688528Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 10:09:34 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:34.979693828Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 13 10:09:34 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:34.979719131Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 13 10:09:34 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:34.979754934Z" level=info msg="Initializing buildkit"
	Dec 13 10:09:35 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:35.145509829Z" level=info msg="Completed buildkit initialization"
	Dec 13 10:09:35 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:35.154410655Z" level=info msg="Daemon has completed initialization"
	Dec 13 10:09:35 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:35.154649477Z" level=info msg="API listen on /run/docker.sock"
	Dec 13 10:09:35 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:35.154687681Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 10:09:35 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:35.154696782Z" level=info msg="API listen on [::]:2376"
	Dec 13 10:09:35 no-preload-803600 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 13 10:09:35 no-preload-803600 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 10:09:35 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:35Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 13 10:09:35 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:35Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 13 10:09:35 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:35Z" level=info msg="Start docker client with request timeout 0s"
	Dec 13 10:09:36 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:36Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 13 10:09:36 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:36Z" level=info msg="Loaded network plugin cni"
	Dec 13 10:09:36 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:36Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 13 10:09:36 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:36Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 13 10:09:36 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:36Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 13 10:09:36 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:36Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 13 10:09:36 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:36Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 13 10:09:36 no-preload-803600 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
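
The dockerd warning at 10:09:34 ("Support for cgroup v1 is deprecated ...") is the first hint of the kubelet failure further down: the node is running on a cgroup v1 hierarchy. Which mode the node (or the WSL2 VM behind Docker Desktop) is using can be checked with stat, which prints cgroup2fs on a cgroup v2 host and tmpfs on v1:

	docker exec no-preload-803600 stat -fc %T /sys/fs/cgroup/
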
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:18:06.898189   11022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:06.899212   11022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:06.900424   11022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:06.902155   11022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:06.904679   11022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
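
The describe-nodes failure is a downstream symptom: kubectl cannot reach an apiserver on localhost:8443 because the control plane never came up (see the kubelet section below). The same check can be made directly inside the node container (a sketch, assuming curl is available in the kicbase image):

	docker exec no-preload-803600 curl -ksf https://localhost:8443/healthz
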
	
	
	==> dmesg <==
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +6.633049] CPU: 11 PID: 394872 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f6f90941b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f6f90941af6.
	[  +0.000001] RSP: 002b:00007fff4c4a6cf0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.821723] CPU: 8 PID: 395025 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f194adc7b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f194adc7af6.
	[  +0.000001] RSP: 002b:00007ffd7d3eb9b0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +8.999623] tmpfs: Unknown parameter 'noswap'
	[  +8.764256] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 10:18:06 up  1:54,  0 user,  load average: 2.33, 3.26, 3.31
	Linux no-preload-803600 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:18:03 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:18:04 no-preload-803600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 13 10:18:04 no-preload-803600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:18:04 no-preload-803600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:18:04 no-preload-803600 kubelet[10783]: E1213 10:18:04.465571   10783 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:18:04 no-preload-803600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:18:04 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:18:05 no-preload-803600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 13 10:18:05 no-preload-803600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:18:05 no-preload-803600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:18:05 no-preload-803600 kubelet[10876]: E1213 10:18:05.248441   10876 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:18:05 no-preload-803600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:18:05 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:18:05 no-preload-803600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 13 10:18:05 no-preload-803600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:18:05 no-preload-803600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:18:05 no-preload-803600 kubelet[10902]: E1213 10:18:05.968038   10902 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:18:05 no-preload-803600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:18:05 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:18:06 no-preload-803600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 13 10:18:06 no-preload-803600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:18:06 no-preload-803600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:18:06 no-preload-803600 kubelet[10969]: E1213 10:18:06.719630   10969 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:18:06 no-preload-803600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:18:06 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
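
The kubelet crash loop in the log above ("kubelet is configured to not run on a host using cgroup v1", restart counters 319-322) is the root cause of this FirstStart failure: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host, and this WSL2 kernel still exposes v1. One commonly suggested workaround, not verified on this host, is to force the WSL2 VM onto cgroup v2 via %UserProfile%\.wslconfig, then run wsl --shutdown and relaunch Docker Desktop:

	[wsl2]
	kernelCommandLine = cgroup_no_v1=all
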
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-803600 -n no-preload-803600
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-803600 -n no-preload-803600: exit status 6 (558.6378ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 10:18:07.643143   14216 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-803600" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-803600" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (528.21s)
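
The trailing status check exits 6 because the profile's endpoint was never written to the kubeconfig; the hint in stdout points at minikube update-context, which with this binary and profile would be:

	out/minikube-windows-amd64.exe update-context -p no-preload-803600

That only repairs the kubectl context, though; it does not address the cgroup v1 kubelet failure that kept the cluster from starting.
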

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (516.44s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-307000 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p newest-cni-307000 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m33.5090477s)

                                                
                                                
-- stdout --
	* [newest-cni-307000] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "newest-cni-307000" primary control-plane node in "newest-cni-307000" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 10:11:23.325658    8076 out.go:360] Setting OutFile to fd 1908 ...
	I1213 10:11:23.368477    8076 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:11:23.368477    8076 out.go:374] Setting ErrFile to fd 1144...
	I1213 10:11:23.368477    8076 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:11:23.383565    8076 out.go:368] Setting JSON to false
	I1213 10:11:23.386792    8076 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6490,"bootTime":1765614192,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 10:11:23.386792    8076 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 10:11:23.389943    8076 out.go:179] * [newest-cni-307000] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 10:11:23.393637    8076 notify.go:221] Checking for updates...
	I1213 10:11:23.394993    8076 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:11:23.396988    8076 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:11:23.398983    8076 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 10:11:23.401751    8076 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:11:23.404083    8076 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:11:23.406845    8076 config.go:182] Loaded profile config "default-k8s-diff-port-818600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 10:11:23.406845    8076 config.go:182] Loaded profile config "kubernetes-upgrade-481200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:11:23.407417    8076 config.go:182] Loaded profile config "no-preload-803600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:11:23.407417    8076 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:11:23.525167    8076 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 10:11:23.528160    8076 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:11:23.761094    8076 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:11:23.743777013 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:11:23.765092    8076 out.go:179] * Using the docker driver based on user configuration
	I1213 10:11:23.767099    8076 start.go:309] selected driver: docker
	I1213 10:11:23.767099    8076 start.go:927] validating driver "docker" against <nil>
	I1213 10:11:23.767099    8076 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:11:23.820366    8076 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:11:24.089198    8076 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:11:24.069638198 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:11:24.089198    8076 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1213 10:11:24.089198    8076 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
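
As the warning says, --network-plugin=cni only tells kubelet to expect a CNI; it does not install one. The friendlier alternative the message points at is the --cni flag, e.g. (a sketch; any of bridge, calico, cilium, flannel, or kindnet would do):

	out/minikube-windows-amd64.exe start -p newest-cni-307000 --cni=bridge --kubernetes-version=v1.35.0-beta.0

In this run minikube falls back to auto-detecting a bridge CNI anyway, as the next lines show.
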
	I1213 10:11:24.090197    8076 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 10:11:24.094209    8076 out.go:179] * Using Docker Desktop driver with root privileges
	I1213 10:11:24.096200    8076 cni.go:84] Creating CNI manager for ""
	I1213 10:11:24.097199    8076 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 10:11:24.097199    8076 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 10:11:24.097199    8076 start.go:353] cluster config:
	{Name:newest-cni-307000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-307000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:11:24.102206    8076 out.go:179] * Starting "newest-cni-307000" primary control-plane node in "newest-cni-307000" cluster
	I1213 10:11:24.103202    8076 cache.go:134] Beginning downloading kic base image for docker with docker
	I1213 10:11:24.106202    8076 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:11:24.114211    8076 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 10:11:24.114211    8076 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:11:24.114211    8076 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1213 10:11:24.114211    8076 cache.go:65] Caching tarball of preloaded images
	I1213 10:11:24.114211    8076 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 10:11:24.114211    8076 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1213 10:11:24.114211    8076 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\config.json ...
	I1213 10:11:24.115199    8076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\config.json: {Name:mk7babad334bc08a2371515e9fcf11111162407e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:24.182206    8076 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:11:24.182206    8076 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:11:24.182206    8076 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:11:24.182206    8076 start.go:360] acquireMachinesLock for newest-cni-307000: {Name:mkec1c80bf050de750404c276f94aaabab293332 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:11:24.182206    8076 start.go:364] duration metric: took 0s to acquireMachinesLock for "newest-cni-307000"
	I1213 10:11:24.183206    8076 start.go:93] Provisioning new machine with config: &{Name:newest-cni-307000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-307000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 10:11:24.183206    8076 start.go:125] createHost starting for "" (driver="docker")
	I1213 10:11:24.187199    8076 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 10:11:24.187199    8076 start.go:159] libmachine.API.Create for "newest-cni-307000" (driver="docker")
	I1213 10:11:24.187199    8076 client.go:173] LocalClient.Create starting
	I1213 10:11:24.187199    8076 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1213 10:11:24.188206    8076 main.go:143] libmachine: Decoding PEM data...
	I1213 10:11:24.188206    8076 main.go:143] libmachine: Parsing certificate...
	I1213 10:11:24.188206    8076 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1213 10:11:24.188206    8076 main.go:143] libmachine: Decoding PEM data...
	I1213 10:11:24.188206    8076 main.go:143] libmachine: Parsing certificate...
	I1213 10:11:24.192199    8076 cli_runner.go:164] Run: docker network inspect newest-cni-307000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 10:11:24.238213    8076 cli_runner.go:211] docker network inspect newest-cni-307000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 10:11:24.241201    8076 network_create.go:284] running [docker network inspect newest-cni-307000] to gather additional debugging logs...
	I1213 10:11:24.241201    8076 cli_runner.go:164] Run: docker network inspect newest-cni-307000
	W1213 10:11:24.288199    8076 cli_runner.go:211] docker network inspect newest-cni-307000 returned with exit code 1
	I1213 10:11:24.288199    8076 network_create.go:287] error running [docker network inspect newest-cni-307000]: docker network inspect newest-cni-307000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-307000 not found
	I1213 10:11:24.289199    8076 network_create.go:289] output of [docker network inspect newest-cni-307000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-307000 not found
	
	** /stderr **
	I1213 10:11:24.292202    8076 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:11:24.360201    8076 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:11:24.391948    8076 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:11:24.406126    8076 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0017df500}
	I1213 10:11:24.406300    8076 network_create.go:124] attempt to create docker network newest-cni-307000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1213 10:11:24.409213    8076 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-307000 newest-cni-307000
	W1213 10:11:24.461886    8076 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-307000 newest-cni-307000 returned with exit code 1
	W1213 10:11:24.461886    8076 network_create.go:149] failed to create docker network newest-cni-307000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-307000 newest-cni-307000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1213 10:11:24.461886    8076 network_create.go:116] failed to create docker network newest-cni-307000 192.168.67.0/24, will retry: subnet is taken
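
"Pool overlaps with other one on this address space" means the Docker daemon already has a network claiming 192.168.67.0/24 even though minikube's own reservation table considered it free; minikube handles this by marking the subnet taken and retrying the next /24 (192.168.76.0/24 below). The conflicting network can be identified with the same subnet listing shown earlier, filtered for the range (a sketch, POSIX shell assumed):

	docker network ls -q | xargs docker network inspect \
	  --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}' | grep 192.168.67
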
	I1213 10:11:24.483903    8076 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:11:24.498298    8076 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00174f740}
	I1213 10:11:24.498298    8076 network_create.go:124] attempt to create docker network newest-cni-307000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 10:11:24.501788    8076 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-307000 newest-cni-307000
	I1213 10:11:24.638660    8076 network_create.go:108] docker network newest-cni-307000 192.168.76.0/24 created
	I1213 10:11:24.639191    8076 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-307000" container
	I1213 10:11:24.651265    8076 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 10:11:24.714312    8076 cli_runner.go:164] Run: docker volume create newest-cni-307000 --label name.minikube.sigs.k8s.io=newest-cni-307000 --label created_by.minikube.sigs.k8s.io=true
	I1213 10:11:24.777373    8076 oci.go:103] Successfully created a docker volume newest-cni-307000
	I1213 10:11:24.781776    8076 cli_runner.go:164] Run: docker run --rm --name newest-cni-307000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-307000 --entrypoint /usr/bin/test -v newest-cni-307000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 10:11:26.197520    8076 cli_runner.go:217] Completed: docker run --rm --name newest-cni-307000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-307000 --entrypoint /usr/bin/test -v newest-cni-307000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.415679s)
	I1213 10:11:26.197520    8076 oci.go:107] Successfully prepared a docker volume newest-cni-307000
	I1213 10:11:26.198062    8076 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 10:11:26.198062    8076 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 10:11:26.202008    8076 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-307000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 10:11:37.377062    8076 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-307000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (11.1747536s)
	I1213 10:11:37.377124    8076 kic.go:203] duration metric: took 11.178907s to extract preloaded images to volume ...
	I1213 10:11:37.381184    8076 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:11:37.622137    8076 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:98 OomKillDisable:true NGoroutines:98 SystemTime:2025-12-13 10:11:37.599442079 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:11:37.627162    8076 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 10:11:37.862231    8076 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-307000 --name newest-cni-307000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-307000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-307000 --network newest-cni-307000 --ip 192.168.76.2 --volume newest-cni-307000:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
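
Note the publish flags in the docker run above: --publish=127.0.0.1::8443 leaves the host port empty, so Docker binds an ephemeral loopback-only port for each of 8443, 22, 2376, 5000 and 32443. minikube recovers the chosen ports via container inspect (the "22/tcp" inspect calls below); the same information is available directly:

	docker port newest-cni-307000 8443/tcp
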
	I1213 10:11:38.550312    8076 cli_runner.go:164] Run: docker container inspect newest-cni-307000 --format={{.State.Running}}
	I1213 10:11:38.611816    8076 cli_runner.go:164] Run: docker container inspect newest-cni-307000 --format={{.State.Status}}
	I1213 10:11:38.686295    8076 cli_runner.go:164] Run: docker exec newest-cni-307000 stat /var/lib/dpkg/alternatives/iptables
	I1213 10:11:38.800916    8076 oci.go:144] the created container "newest-cni-307000" has a running status.
	I1213 10:11:38.800916    8076 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-307000\id_rsa...
	I1213 10:11:38.850303    8076 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-307000\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 10:11:38.928341    8076 cli_runner.go:164] Run: docker container inspect newest-cni-307000 --format={{.State.Status}}
	I1213 10:11:38.990668    8076 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 10:11:38.990668    8076 kic_runner.go:114] Args: [docker exec --privileged newest-cni-307000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 10:11:39.108259    8076 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-307000\id_rsa...
	I1213 10:11:41.271316    8076 cli_runner.go:164] Run: docker container inspect newest-cni-307000 --format={{.State.Status}}
	I1213 10:11:41.323912    8076 machine.go:94] provisionDockerMachine start ...
	I1213 10:11:41.327636    8076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307000
	I1213 10:11:41.383866    8076 main.go:143] libmachine: Using SSH client type: native
	I1213 10:11:41.397260    8076 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 52920 <nil> <nil>}
	I1213 10:11:41.397260    8076 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:11:41.580272    8076 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-307000
	
	I1213 10:11:41.580272    8076 ubuntu.go:182] provisioning hostname "newest-cni-307000"
	I1213 10:11:41.587510    8076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307000
	I1213 10:11:41.653853    8076 main.go:143] libmachine: Using SSH client type: native
	I1213 10:11:41.653853    8076 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 52920 <nil> <nil>}
	I1213 10:11:41.653853    8076 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-307000 && echo "newest-cni-307000" | sudo tee /etc/hostname
	I1213 10:11:41.929663    8076 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-307000
	
	I1213 10:11:41.934416    8076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307000
	I1213 10:11:42.016807    8076 main.go:143] libmachine: Using SSH client type: native
	I1213 10:11:42.017816    8076 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 52920 <nil> <nil>}
	I1213 10:11:42.017816    8076 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-307000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-307000/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-307000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:11:42.203511    8076 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:11:42.203511    8076 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1213 10:11:42.203511    8076 ubuntu.go:190] setting up certificates
	I1213 10:11:42.203511    8076 provision.go:84] configureAuth start
	I1213 10:11:42.208624    8076 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-307000
	I1213 10:11:42.261666    8076 provision.go:143] copyHostCerts
	I1213 10:11:42.261666    8076 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1213 10:11:42.261666    8076 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1213 10:11:42.261666    8076 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1213 10:11:42.262672    8076 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1213 10:11:42.262672    8076 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1213 10:11:42.262672    8076 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1213 10:11:42.263666    8076 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1213 10:11:42.263666    8076 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1213 10:11:42.263666    8076 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1213 10:11:42.264664    8076 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-307000 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-307000]
	I1213 10:11:42.360101    8076 provision.go:177] copyRemoteCerts
	I1213 10:11:42.363701    8076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:11:42.366784    8076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307000
	I1213 10:11:42.419510    8076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52920 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-307000\id_rsa Username:docker}
	I1213 10:11:42.679497    8076 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:11:42.722691    8076 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:11:42.767364    8076 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 10:11:42.794358    8076 provision.go:87] duration metric: took 590.8385ms to configureAuth
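configureAuth generated a Docker server certificate whose SANs (listed above: 127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-307000) must cover every address a client may dial, then copied ca.pem, server.pem and server-key.pem to /etc/docker for the --tlsverify daemon flags written below. One way to inspect the deployed certificate (a sketch; the -ext option needs OpenSSL 1.1.1 or newer):

    docker exec newest-cni-307000 openssl x509 -in /etc/docker/server.pem \
        -noout -subject -ext subjectAltName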
	I1213 10:11:42.794358    8076 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:11:42.794358    8076 config.go:182] Loaded profile config "newest-cni-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:11:42.797359    8076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307000
	I1213 10:11:42.849370    8076 main.go:143] libmachine: Using SSH client type: native
	I1213 10:11:42.849370    8076 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 52920 <nil> <nil>}
	I1213 10:11:42.849370    8076 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 10:11:43.292577    8076 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1213 10:11:43.292577    8076 ubuntu.go:71] root file system type: overlay
	I1213 10:11:43.292577    8076 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 10:11:43.295905    8076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307000
	I1213 10:11:43.350898    8076 main.go:143] libmachine: Using SSH client type: native
	I1213 10:11:43.350898    8076 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 52920 <nil> <nil>}
	I1213 10:11:43.351909    8076 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 10:11:43.534319    8076 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 10:11:43.537314    8076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307000
	I1213 10:11:43.589311    8076 main.go:143] libmachine: Using SSH client type: native
	I1213 10:11:43.589311    8076 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 52920 <nil> <nil>}
	I1213 10:11:43.589311    8076 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 10:11:45.242890    8076 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-13 10:11:43.520221659 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1213 10:11:45.243487    8076 machine.go:97] duration metric: took 3.9195204s to provisionDockerMachine
	I1213 10:11:45.243487    8076 client.go:176] duration metric: took 21.0559963s to LocalClient.Create
	I1213 10:11:45.243487    8076 start.go:167] duration metric: took 21.0559963s to libmachine.API.Create "newest-cni-307000"
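The unit file written during provisioning relies on the systemd rule its embedded comment cites: a service that is not Type=oneshot may carry only one ExecStart=, so an override must first reset the inherited command with an empty ExecStart= line before supplying its own. The same technique in a conventional drop-in, sketched with illustrative paths and flags rather than minikube's full command line:

    # /etc/systemd/system/docker.service.d/override.conf (illustrative)
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

    sudo systemctl daemon-reload && sudo systemctl restart docker

minikube instead replaces /lib/systemd/system/docker.service wholesale whenever the diff above shows a change, which sidesteps drop-in merging entirely.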
	I1213 10:11:45.243487    8076 start.go:293] postStartSetup for "newest-cni-307000" (driver="docker")
	I1213 10:11:45.243487    8076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:11:45.250720    8076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:11:45.255996    8076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307000
	I1213 10:11:45.309936    8076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52920 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-307000\id_rsa Username:docker}
	I1213 10:11:45.442384    8076 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:11:45.451191    8076 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:11:45.451191    8076 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:11:45.451191    8076 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1213 10:11:45.451191    8076 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1213 10:11:45.452182    8076 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> 29682.pem in /etc/ssl/certs
	I1213 10:11:45.458179    8076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 10:11:45.474405    8076 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /etc/ssl/certs/29682.pem (1708 bytes)
	I1213 10:11:45.501548    8076 start.go:296] duration metric: took 258.0574ms for postStartSetup
	I1213 10:11:45.506545    8076 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-307000
	I1213 10:11:45.561359    8076 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\config.json ...
	I1213 10:11:45.567539    8076 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:11:45.570850    8076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307000
	I1213 10:11:45.627444    8076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52920 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-307000\id_rsa Username:docker}
	I1213 10:11:45.748163    8076 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:11:45.756514    8076 start.go:128] duration metric: took 21.5730087s to createHost
	I1213 10:11:45.756514    8076 start.go:83] releasing machines lock for "newest-cni-307000", held for 21.5740086s
	I1213 10:11:45.760136    8076 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-307000
	I1213 10:11:45.816770    8076 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1213 10:11:45.820869    8076 ssh_runner.go:195] Run: cat /version.json
	I1213 10:11:45.820869    8076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307000
	I1213 10:11:45.824450    8076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307000
	I1213 10:11:45.879713    8076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52920 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-307000\id_rsa Username:docker}
	I1213 10:11:45.879713    8076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52920 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-307000\id_rsa Username:docker}
	I1213 10:11:46.006737    8076 ssh_runner.go:195] Run: systemctl --version
	W1213 10:11:46.008635    8076 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1213 10:11:46.023837    8076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:11:46.031765    8076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:11:46.036631    8076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:11:46.091374    8076 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 10:11:46.091448    8076 start.go:496] detecting cgroup driver to use...
	I1213 10:11:46.091448    8076 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:11:46.091641    8076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:11:46.118363    8076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1213 10:11:46.119635    8076 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1213 10:11:46.119635    8076 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
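The probe failure above is a naming artifact rather than a network result: the Windows binary name curl.exe was passed into the Linux container, where only curl exists, so bash exited with status 127 (command not found) and minikube raised the registry warning regardless. Re-running the probe with the Linux name would separate a real proxy problem from this artifact (a sketch against this run's container):

    docker exec newest-cni-307000 curl -sS -m 2 https://registry.k8s.io/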
	I1213 10:11:46.138545    8076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:11:46.153953    8076 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:11:46.157681    8076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:11:46.179423    8076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:11:46.200120    8076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:11:46.220066    8076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:11:46.238059    8076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:11:46.255065    8076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:11:46.273064    8076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:11:46.291063    8076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 10:11:46.309064    8076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:11:46.324957    8076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:11:46.343560    8076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:11:46.502746    8076 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 10:11:46.646238    8076 start.go:496] detecting cgroup driver to use...
	I1213 10:11:46.646238    8076 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:11:46.650952    8076 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 10:11:46.678389    8076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:11:46.704953    8076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:11:46.772841    8076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:11:46.797797    8076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:11:46.818091    8076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:11:46.847047    8076 ssh_runner.go:195] Run: which cri-dockerd
	I1213 10:11:46.858988    8076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 10:11:46.873603    8076 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1213 10:11:46.902731    8076 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 10:11:47.050103    8076 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 10:11:47.169002    8076 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 10:11:47.169002    8076 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 10:11:47.204508    8076 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1213 10:11:47.225867    8076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:11:47.381707    8076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 10:11:48.269879    8076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
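Both runtimes are being pinned to the same cgroup driver here: containerd via the SystemdCgroup = false edit above, Docker via the 130-byte /etc/docker/daemon.json, matching the cgroupDriver: cgroupfs that the kubelet configuration below declares; a kubelet/runtime mismatch is a classic control-plane bootstrap failure. The log confirms the setting at 10:11:49 with docker info; the same check by hand:

    docker exec newest-cni-307000 docker info --format '{{.CgroupDriver}}'
    # expected: cgroupfs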
	I1213 10:11:48.300416    8076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 10:11:48.329888    8076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 10:11:48.357116    8076 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 10:11:48.521887    8076 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 10:11:48.710480    8076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:11:48.888757    8076 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 10:11:48.914520    8076 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1213 10:11:48.939353    8076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:11:49.093913    8076 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 10:11:49.215235    8076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 10:11:49.235526    8076 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 10:11:49.241104    8076 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 10:11:49.249315    8076 start.go:564] Will wait 60s for crictl version
	I1213 10:11:49.254666    8076 ssh_runner.go:195] Run: which crictl
	I1213 10:11:49.266755    8076 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:11:49.317889    8076 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1213 10:11:49.321793    8076 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 10:11:49.383684    8076 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 10:11:49.443678    8076 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1213 10:11:49.446927    8076 cli_runner.go:164] Run: docker exec -t newest-cni-307000 dig +short host.docker.internal
	I1213 10:11:49.591977    8076 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1213 10:11:49.596457    8076 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1213 10:11:49.607177    8076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
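Note the shape of that /etc/hosts rewrite: the filtered file is assembled in /tmp/h.$$ and then copied back with sudo cp instead of redirected or moved into place. In a Docker container /etc/hosts is a bind-mounted file, so it has to be updated by writing into the existing inode (which cp does); replacing it with mv would typically fail with "Device or resource busy". The same pattern, generalized with an illustrative entry:

    entry=$(printf '%s\t%s' 192.168.65.254 host.minikube.internal)
    { grep -v 'host.minikube.internal$' /etc/hosts; echo "$entry"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts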
	I1213 10:11:49.629556    8076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-307000
	I1213 10:11:49.684766    8076 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 10:11:49.686495    8076 kubeadm.go:884] updating cluster {Name:newest-cni-307000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-307000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:11:49.686495    8076 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 10:11:49.689696    8076 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 10:11:49.724572    8076 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 10:11:49.724572    8076 docker.go:621] Images already preloaded, skipping extraction
	I1213 10:11:49.730552    8076 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 10:11:49.768663    8076 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 10:11:49.769224    8076 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:11:49.769255    8076 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 docker true true} ...
	I1213 10:11:49.769426    8076 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-307000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-307000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 10:11:49.773488    8076 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1213 10:11:49.869167    8076 cni.go:84] Creating CNI manager for ""
	I1213 10:11:49.869167    8076 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 10:11:49.869167    8076 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 10:11:49.869167    8076 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-307000 NodeName:newest-cni-307000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:11:49.869167    8076 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-307000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 10:11:49.874189    8076 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:11:49.892639    8076 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:11:49.895648    8076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:11:49.910270    8076 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1213 10:11:49.934223    8076 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:11:49.961473    8076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
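The kubeadm.yaml just staged on the node is four YAML documents separated by ---: InitConfiguration (node-local bootstrap settings), ClusterConfiguration (cert SANs, control-plane endpoint, subnets), KubeletConfiguration, and KubeProxyConfiguration. Recent kubeadm releases can sanity-check such a file before init runs; a sketch, assuming the validate subcommand is present in this v1.35.0-beta.0 build:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new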
	I1213 10:11:49.991171    8076 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:11:50.000662    8076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:11:50.025450    8076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:11:50.184919    8076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:11:50.207609    8076 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000 for IP: 192.168.76.2
	I1213 10:11:50.207671    8076 certs.go:195] generating shared ca certs ...
	I1213 10:11:50.207671    8076 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:50.208404    8076 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1213 10:11:50.208735    8076 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1213 10:11:50.208887    8076 certs.go:257] generating profile certs ...
	I1213 10:11:50.209286    8076 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\client.key
	I1213 10:11:50.209335    8076 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\client.crt with IP's: []
	I1213 10:11:50.246881    8076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\client.crt ...
	I1213 10:11:50.246881    8076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\client.crt: {Name:mkcdec4a944853c2711d479bc5170407351ff52f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:50.247134    8076 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\client.key ...
	I1213 10:11:50.247134    8076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\client.key: {Name:mka35cd5eba2d53956de1a28a887051d3baac5a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:50.248716    8076 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\apiserver.key.1d6632be
	I1213 10:11:50.248716    8076 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\apiserver.crt.1d6632be with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1213 10:11:50.348169    8076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\apiserver.crt.1d6632be ...
	I1213 10:11:50.348255    8076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\apiserver.crt.1d6632be: {Name:mke09754597859d4df7b826b1e60099e0d802662 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:50.349297    8076 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\apiserver.key.1d6632be ...
	I1213 10:11:50.349356    8076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\apiserver.key.1d6632be: {Name:mk1038f9588f07607d1502037bc9ae268ba2697f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:50.350550    8076 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\apiserver.crt.1d6632be -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\apiserver.crt
	I1213 10:11:50.366690    8076 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\apiserver.key.1d6632be -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\apiserver.key
	I1213 10:11:50.367847    8076 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\proxy-client.key
	I1213 10:11:50.367983    8076 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\proxy-client.crt with IP's: []
	I1213 10:11:50.583671    8076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\proxy-client.crt ...
	I1213 10:11:50.583671    8076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\proxy-client.crt: {Name:mk54d5c3f08c86c9265fabc474363f1b6fcef8fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:50.585011    8076 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\proxy-client.key ...
	I1213 10:11:50.585011    8076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\proxy-client.key: {Name:mk0a169e343370b561b50872be9f9e35f4fe3a6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:50.598455    8076 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem (1338 bytes)
	W1213 10:11:50.598455    8076 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968_empty.pem, impossibly tiny 0 bytes
	I1213 10:11:50.598455    8076 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1213 10:11:50.599468    8076 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1213 10:11:50.599468    8076 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1213 10:11:50.599468    8076 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1213 10:11:50.599468    8076 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem (1708 bytes)
	I1213 10:11:50.600458    8076 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:11:50.628461    8076 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:11:50.661214    8076 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:11:50.704438    8076 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 10:11:50.734682    8076 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:11:50.764681    8076 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 10:11:50.790691    8076 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:11:50.820359    8076 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:11:50.857183    8076 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /usr/share/ca-certificates/29682.pem (1708 bytes)
	I1213 10:11:50.891751    8076 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:11:50.925250    8076 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem --> /usr/share/ca-certificates/2968.pem (1338 bytes)
	I1213 10:11:50.961282    8076 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:11:50.994095    8076 ssh_runner.go:195] Run: openssl version
	I1213 10:11:51.012147    8076 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29682.pem
	I1213 10:11:51.033726    8076 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29682.pem /etc/ssl/certs/29682.pem
	I1213 10:11:51.056295    8076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29682.pem
	I1213 10:11:51.063572    8076 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:48 /usr/share/ca-certificates/29682.pem
	I1213 10:11:51.067359    8076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29682.pem
	I1213 10:11:51.127602    8076 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:11:51.148146    8076 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/29682.pem /etc/ssl/certs/3ec20f2e.0
	I1213 10:11:51.167810    8076 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:11:51.185342    8076 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:11:51.204870    8076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:11:51.212939    8076 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:11:51.218760    8076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:11:51.270370    8076 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:11:51.287782    8076 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 10:11:51.306868    8076 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2968.pem
	I1213 10:11:51.325200    8076 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2968.pem /etc/ssl/certs/2968.pem
	I1213 10:11:51.341947    8076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2968.pem
	I1213 10:11:51.348899    8076 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:48 /usr/share/ca-certificates/2968.pem
	I1213 10:11:51.353195    8076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2968.pem
	I1213 10:11:51.404911    8076 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:11:51.421485    8076 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2968.pem /etc/ssl/certs/51391683.0
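The test/ln sequences above maintain an OpenSSL-style trust store: each CA under /etc/ssl/certs must be reachable through a symlink named after its subject hash plus a .0 suffix, which is the key OpenSSL looks up at verification time. Reproducing one of the links by hand, with values matching this run:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h = b5213941 above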
	I1213 10:11:51.441542    8076 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:11:51.452208    8076 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 10:11:51.452208    8076 kubeadm.go:401] StartCluster: {Name:newest-cni-307000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-307000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:11:51.456745    8076 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 10:11:51.502783    8076 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:11:51.527796    8076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:11:51.544153    8076 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:11:51.548676    8076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:11:51.563430    8076 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:11:51.563430    8076 kubeadm.go:158] found existing configuration files:
	
	I1213 10:11:51.567427    8076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 10:11:51.581589    8076 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:11:51.586368    8076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:11:51.604125    8076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 10:11:51.622442    8076 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:11:51.628376    8076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:11:51.646655    8076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 10:11:51.658885    8076 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:11:51.663189    8076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:11:51.682703    8076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 10:11:51.697979    8076 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:11:51.703854    8076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:11:51.722908    8076 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:11:51.841170    8076 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1213 10:11:51.935528    8076 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:11:52.043010    8076 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:15:53.929970    8076 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:15:53.929970    8076 kubeadm.go:319] 
	I1213 10:15:53.930565    8076 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
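This is the failure that sinks the start: kubeadm init sat in its wait-control-plane phase from 10:11:51 to 10:15:53 because the kubelet never answered its local health endpoint. The first two triage steps follow directly from the error text, plus the usual journal check (a sketch; assumes the container's systemd journal is readable):

    docker exec newest-cni-307000 curl -sSL http://127.0.0.1:10248/healthz
    docker exec newest-cni-307000 journalctl -u kubelet --no-pager -n 50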
	I1213 10:15:53.936274    8076 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:15:53.936505    8076 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:15:53.936901    8076 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:15:53.936901    8076 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1213 10:15:53.936901    8076 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1213 10:15:53.936901    8076 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1213 10:15:53.937621    8076 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1213 10:15:53.937621    8076 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1213 10:15:53.937621    8076 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1213 10:15:53.937621    8076 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1213 10:15:53.938263    8076 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1213 10:15:53.938454    8076 kubeadm.go:319] CONFIG_INET: enabled
	I1213 10:15:53.938631    8076 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1213 10:15:53.938733    8076 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1213 10:15:53.938997    8076 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1213 10:15:53.939305    8076 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1213 10:15:53.939518    8076 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1213 10:15:53.939809    8076 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1213 10:15:53.939936    8076 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1213 10:15:53.939936    8076 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1213 10:15:53.939936    8076 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1213 10:15:53.939936    8076 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1213 10:15:53.940587    8076 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1213 10:15:53.940774    8076 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1213 10:15:53.941098    8076 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1213 10:15:53.941098    8076 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1213 10:15:53.941098    8076 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1213 10:15:53.941098    8076 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1213 10:15:53.941098    8076 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1213 10:15:53.941627    8076 kubeadm.go:319] OS: Linux
	I1213 10:15:53.941823    8076 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:15:53.941878    8076 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:15:53.941878    8076 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:15:53.941878    8076 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:15:53.941878    8076 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:15:53.942557    8076 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:15:53.942618    8076 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:15:53.942618    8076 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:15:53.942618    8076 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:15:53.943217    8076 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:15:53.943251    8076 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:15:53.943251    8076 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:15:53.943777    8076 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:15:53.946208    8076 out.go:252]   - Generating certificates and keys ...
	I1213 10:15:53.946208    8076 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:15:53.946208    8076 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:15:53.946208    8076 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 10:15:53.946208    8076 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 10:15:53.946208    8076 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 10:15:53.946208    8076 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 10:15:53.947202    8076 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 10:15:53.947202    8076 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-307000] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 10:15:53.947202    8076 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 10:15:53.947202    8076 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-307000] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 10:15:53.947202    8076 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 10:15:53.947202    8076 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 10:15:53.948215    8076 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 10:15:53.948215    8076 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:15:53.948215    8076 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:15:53.948215    8076 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:15:53.948215    8076 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:15:53.948215    8076 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:15:53.948215    8076 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:15:53.949197    8076 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:15:53.949197    8076 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:15:53.951203    8076 out.go:252]   - Booting up control plane ...
	I1213 10:15:53.952213    8076 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:15:53.952213    8076 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:15:53.952213    8076 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:15:53.952213    8076 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:15:53.952213    8076 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:15:53.953223    8076 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:15:53.953223    8076 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:15:53.953223    8076 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:15:53.953223    8076 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:15:53.953223    8076 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:15:53.954202    8076 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000854815s
	I1213 10:15:53.954202    8076 kubeadm.go:319] 
	I1213 10:15:53.954202    8076 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:15:53.954202    8076 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:15:53.954202    8076 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:15:53.954202    8076 kubeadm.go:319] 
	I1213 10:15:53.954202    8076 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:15:53.954202    8076 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:15:53.954202    8076 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:15:53.954202    8076 kubeadm.go:319] 
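	The [kubelet-check] phase above polls http://127.0.0.1:10248/healthz for up to 4m0s before giving up. The same probe, plus the two systemd commands kubeadm recommends, can be run by hand; a sketch assuming normal minikube SSH access to this profile:

	# probe the kubelet health endpoint the way kubeadm's curl does
	minikube ssh -p newest-cni-307000 -- curl -sSL http://127.0.0.1:10248/healthz
	# the two triage commands suggested in the error text above
	minikube ssh -p newest-cni-307000 -- sudo systemctl status kubelet
	minikube ssh -p newest-cni-307000 -- sudo journalctl -xeu kubelet --no-pager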
	W1213 10:15:53.955199    8076 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-307000] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-307000] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000854815s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
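	The stderr warnings above name two environment issues on this cgroup v1 WSL2 kernel: swap is enabled, and kubelet v1.35+ gates cgroup v1 behind FailCgroupV1. A hedged remediation sketch, run inside the node; the YAML key is the camelCase form of the option the warning names, and the append is crude, assuming the key is not already present in /var/lib/kubelet/config.yaml:

	# disable swap, as the [WARNING Swap] line suggests
	sudo swapoff -a
	# opt kubelet back in to (deprecated) cgroup v1, per the SystemVerification warning
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet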
	
	I1213 10:15:53.959220    8076 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1213 10:15:54.434356    8076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:15:54.458420    8076 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:15:54.462708    8076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:15:54.480656    8076 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:15:54.480656    8076 kubeadm.go:158] found existing configuration files:
	
	I1213 10:15:54.485131    8076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 10:15:54.507079    8076 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:15:54.511470    8076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:15:54.530505    8076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 10:15:54.543281    8076 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:15:54.548206    8076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:15:54.566471    8076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 10:15:54.583551    8076 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:15:54.588064    8076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:15:54.610383    8076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 10:15:54.629712    8076 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:15:54.635695    8076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
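	The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes survives only if it already points at https://control-plane.minikube.internal:8443. The same check, condensed into one equivalent loop:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done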
	I1213 10:15:54.656102    8076 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:15:54.781995    8076 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1213 10:15:54.865648    8076 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:15:54.965966    8076 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:19:55.944070    8076 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:19:55.944070    8076 kubeadm.go:319] 
	I1213 10:19:55.944070    8076 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 10:19:55.947579    8076 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:19:55.947579    8076 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:19:55.947579    8076 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:19:55.947579    8076 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1213 10:19:55.947579    8076 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1213 10:19:55.948596    8076 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1213 10:19:55.948596    8076 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1213 10:19:55.948596    8076 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1213 10:19:55.948596    8076 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1213 10:19:55.948596    8076 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1213 10:19:55.948596    8076 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1213 10:19:55.948596    8076 kubeadm.go:319] CONFIG_INET: enabled
	I1213 10:19:55.948596    8076 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1213 10:19:55.948596    8076 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1213 10:19:55.948596    8076 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1213 10:19:55.949578    8076 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1213 10:19:55.949578    8076 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1213 10:19:55.949578    8076 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1213 10:19:55.949578    8076 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1213 10:19:55.949578    8076 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1213 10:19:55.949578    8076 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1213 10:19:55.949578    8076 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1213 10:19:55.949578    8076 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1213 10:19:55.949578    8076 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1213 10:19:55.950584    8076 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1213 10:19:55.950584    8076 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1213 10:19:55.950584    8076 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1213 10:19:55.950584    8076 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1213 10:19:55.950584    8076 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1213 10:19:55.950584    8076 kubeadm.go:319] OS: Linux
	I1213 10:19:55.950584    8076 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:19:55.950584    8076 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:19:55.950584    8076 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:19:55.950584    8076 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:19:55.950584    8076 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:19:55.951583    8076 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:19:55.951583    8076 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:19:55.951583    8076 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:19:55.951583    8076 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:19:55.951583    8076 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:19:55.951583    8076 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:19:55.951583    8076 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:19:55.952583    8076 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:19:55.958587    8076 out.go:252]   - Generating certificates and keys ...
	I1213 10:19:55.958587    8076 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:19:55.958587    8076 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:19:55.958587    8076 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:19:55.958587    8076 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:19:55.958587    8076 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:19:55.958587    8076 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:19:55.958587    8076 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:19:55.959578    8076 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:19:55.959578    8076 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:19:55.959578    8076 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:19:55.959578    8076 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:19:55.959578    8076 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:19:55.959578    8076 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:19:55.959578    8076 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:19:55.959578    8076 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:19:55.960576    8076 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:19:55.960576    8076 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:19:55.960576    8076 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:19:55.960576    8076 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:19:55.971572    8076 out.go:252]   - Booting up control plane ...
	I1213 10:19:55.971572    8076 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:19:55.972579    8076 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:19:55.972579    8076 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:19:55.972579    8076 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:19:55.972579    8076 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:19:55.973584    8076 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:19:55.973584    8076 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:19:55.973584    8076 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:19:55.973584    8076 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:19:55.974581    8076 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:19:55.974581    8076 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000719997s
	I1213 10:19:55.974581    8076 kubeadm.go:319] 
	I1213 10:19:55.974581    8076 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:19:55.974581    8076 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:19:55.974581    8076 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:19:55.974581    8076 kubeadm.go:319] 
	I1213 10:19:55.974581    8076 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:19:55.975583    8076 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:19:55.975583    8076 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:19:55.975583    8076 kubeadm.go:319] 
	I1213 10:19:55.975583    8076 kubeadm.go:403] duration metric: took 8m4.5165476s to StartCluster
	I1213 10:19:55.975583    8076 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:19:55.981580    8076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:19:56.047600    8076 cri.go:89] found id: ""
	I1213 10:19:56.047600    8076 logs.go:282] 0 containers: []
	W1213 10:19:56.047600    8076 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:19:56.047600    8076 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:19:56.052579    8076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:19:56.096883    8076 cri.go:89] found id: ""
	I1213 10:19:56.096883    8076 logs.go:282] 0 containers: []
	W1213 10:19:56.096883    8076 logs.go:284] No container was found matching "etcd"
	I1213 10:19:56.096883    8076 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:19:56.101186    8076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:19:56.155064    8076 cri.go:89] found id: ""
	I1213 10:19:56.155064    8076 logs.go:282] 0 containers: []
	W1213 10:19:56.155064    8076 logs.go:284] No container was found matching "coredns"
	I1213 10:19:56.155064    8076 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:19:56.160080    8076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:19:56.230080    8076 cri.go:89] found id: ""
	I1213 10:19:56.230080    8076 logs.go:282] 0 containers: []
	W1213 10:19:56.230080    8076 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:19:56.230080    8076 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:19:56.235066    8076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:19:56.280066    8076 cri.go:89] found id: ""
	I1213 10:19:56.280066    8076 logs.go:282] 0 containers: []
	W1213 10:19:56.280066    8076 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:19:56.280066    8076 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:19:56.284066    8076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:19:56.353072    8076 cri.go:89] found id: ""
	I1213 10:19:56.353072    8076 logs.go:282] 0 containers: []
	W1213 10:19:56.353072    8076 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:19:56.353072    8076 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:19:56.357073    8076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:19:56.406066    8076 cri.go:89] found id: ""
	I1213 10:19:56.406066    8076 logs.go:282] 0 containers: []
	W1213 10:19:56.406066    8076 logs.go:284] No container was found matching "kindnet"
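	After the second attempt fails, minikube queries the CRI for each control-plane container by name; every lookup above returns an empty ID list, confirming nothing was ever started. The same scan as a single loop, run inside the node:

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  echo "$name: ${ids:-<none>}"
	done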
	I1213 10:19:56.406066    8076 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:19:56.406066    8076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:19:56.496118    8076 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:19:56.487694   10265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:19:56.489010   10265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:19:56.489995   10265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:19:56.491141   10265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:19:56.492015   10265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
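	The describe-nodes failure is a downstream symptom: with no kube-apiserver container running, nothing listens on localhost:8443. A quick check inside the node that separates "no listener" from "wrong kubeconfig"; connection refused here matches the empty crictl scan above:

	curl -k --max-time 5 https://localhost:8443/healthz || echo "apiserver not listening"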
	I1213 10:19:56.496118    8076 logs.go:123] Gathering logs for Docker ...
	I1213 10:19:56.496118    8076 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:19:56.525110    8076 logs.go:123] Gathering logs for container status ...
	I1213 10:19:56.525110    8076 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:19:56.578958    8076 logs.go:123] Gathering logs for kubelet ...
	I1213 10:19:56.579489    8076 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:19:56.647196    8076 logs.go:123] Gathering logs for dmesg ...
	I1213 10:19:56.647196    8076 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
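	The four log-gathering runs above (kubelet, Docker/cri-docker, container status, dmesg) can be captured as one bundle when preparing an issue report. A sketch using the same invocations the log shows, with --no-pager added for non-interactive capture; triage.txt is an arbitrary output name:

	{
	  sudo journalctl -u kubelet -n 400 --no-pager
	  sudo journalctl -u docker -u cri-docker -n 400 --no-pager
	  sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	} > triage.txt 2>&1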
	W1213 10:19:56.684209    8076 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000719997s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 10:19:56.684209    8076 out.go:285] * 
	W1213 10:19:56.684209    8076 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000719997s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:19:56.685204    8076 out.go:285] * 
	W1213 10:19:56.686199    8076 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
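	In concrete terms, the advice box maps to the following host-side commands; the logs invocation is the one quoted in the box, while the delete is an optional clean-retry step this run did not attempt:

	minikube logs --file=logs.txt -p newest-cni-307000
	minikube delete -p newest-cni-307000   # optional: discard the broken profile before retrying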
	I1213 10:19:56.695194    8076 out.go:203] 
	W1213 10:19:56.697194    8076 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000719997s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:19:56.698197    8076 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 10:19:56.698197    8076 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 10:19:56.700209    8076 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p newest-cni-307000 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0": exit status 109
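Two leads from the trace above are worth noting for reruns. The exit path is K8S_KUBELET_NOT_RUNNING (the kubelet healthz probe at 127.0.0.1:10248 timed out), and minikube's own suggestion is a kubelet cgroup-driver override; applied to the same profile, driver, and Kubernetes version as this run, a retry would look like:

	out/minikube-windows-amd64.exe start -p newest-cni-307000 --driver=docker --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd

The stderr warnings also flag cgroup v1 deprecation on this WSL2 kernel. Which cgroup version the node actually sees can be checked over the ssh path minikube provides; stat reports cgroup2fs on a cgroup v2 mount and tmpfs on v1:

	out/minikube-windows-amd64.exe ssh -p newest-cni-307000 -- stat -fc %T /sys/fs/cgroup/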
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-307000
helpers_test.go:244: (dbg) docker inspect newest-cni-307000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cc243490f4045f6275620fead3cf743bdcc06793f30944d53d3d0e22c416211e",
	        "Created": "2025-12-13T10:11:37.912113644Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 355235,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:11:38.183095334Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/cc243490f4045f6275620fead3cf743bdcc06793f30944d53d3d0e22c416211e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cc243490f4045f6275620fead3cf743bdcc06793f30944d53d3d0e22c416211e/hostname",
	        "HostsPath": "/var/lib/docker/containers/cc243490f4045f6275620fead3cf743bdcc06793f30944d53d3d0e22c416211e/hosts",
	        "LogPath": "/var/lib/docker/containers/cc243490f4045f6275620fead3cf743bdcc06793f30944d53d3d0e22c416211e/cc243490f4045f6275620fead3cf743bdcc06793f30944d53d3d0e22c416211e-json.log",
	        "Name": "/newest-cni-307000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-307000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-307000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1fd6cedff83bee99df393eab952a55cc2565a988396fbf552640cb0ef5f70bba-init/diff:/var/lib/docker/overlay2/429aa299c6fcdb1695d08ec7c893c57c033afffcd3ec41fc904bf3236db5abde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1fd6cedff83bee99df393eab952a55cc2565a988396fbf552640cb0ef5f70bba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1fd6cedff83bee99df393eab952a55cc2565a988396fbf552640cb0ef5f70bba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1fd6cedff83bee99df393eab952a55cc2565a988396fbf552640cb0ef5f70bba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-307000",
	                "Source": "/var/lib/docker/volumes/newest-cni-307000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-307000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-307000",
	                "name.minikube.sigs.k8s.io": "newest-cni-307000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "10d63c7cb118c215d26ed42a89aeec2ea240984b20e4abf3bd5096fefb5edd44",
	            "SandboxKey": "/var/run/docker/netns/10d63c7cb118",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52920"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52921"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52922"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52923"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52924"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-307000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "091d798055d24cd11a8819044665f960a2f1124bb052fb661c5793e42aeec481",
	                    "EndpointID": "c474b750c640cb16671e0143b43f227805c0724bfd0be3d318c79e885a42cae3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-307000",
	                        "cc243490f404"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
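The inspect dump shows the container itself is healthy: running, privileged, capped at 3 GiB of memory, with all five control ports (22, 2376, 5000, 8443, 32443) published on 127.0.0.1. When only a field or two is needed, the same CLI accepts Go templates; for example, from PowerShell and assuming the same container name, the container state and the host port mapped to the Kubernetes API server:

	docker container inspect newest-cni-307000 --format '{{.State.Status}}'
	docker container inspect newest-cni-307000 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'

The second template is the same shape minikube itself runs for the 22/tcp mapping later in this trace.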
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-307000 -n newest-cni-307000
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-307000 -n newest-cni-307000: exit status 6 (601.2315ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 10:19:57.742811    3348 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-307000" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
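Exit status 6 is a follow-on symptom rather than a separate failure: the host container reports Running, but the aborted start apparently never merged an endpoint for "newest-cni-307000" into the kubeconfig, which matches what the status probe complains about. For the stale-context case the warning itself names the remedy, which for this profile would be:

	out/minikube-windows-amd64.exe update-context -p newest-cni-307000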
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-307000 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-307000 logs -n 25: (1.1320388s)
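The 25-line tail gathered here is the post-mortem default; the advice box earlier in the trace asks for the complete log set when filing an issue, which for this profile would be written out with:

	out/minikube-windows-amd64.exe -p newest-cni-307000 logs --file=logs.txt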
helpers_test.go:261: TestStartStop/group/newest-cni/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                   │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-416400 sudo cat /etc/containerd/config.toml                                                                                     │ auto-416400       │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo containerd config dump                                                                                              │ auto-416400       │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl status crio --all --full --no-pager                                                                       │ auto-416400       │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │                     │
	│ ssh     │ -p auto-416400 sudo systemctl cat crio --no-pager                                                                                       │ auto-416400       │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                             │ auto-416400       │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo crio config                                                                                                         │ auto-416400       │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ delete  │ -p auto-416400                                                                                                                          │ auto-416400       │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ start   │ -p kindnet-416400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker                          │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:18 UTC │ 13 Dec 25 10:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-803600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ no-preload-803600 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:18 UTC │                     │
	│ ssh     │ -p kindnet-416400 pgrep -a kubelet                                                                                                      │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ stop    │ -p no-preload-803600 --alsologtostderr -v=3                                                                                             │ no-preload-803600 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ addons  │ enable dashboard -p no-preload-803600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                            │ no-preload-803600 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ start   │ -p no-preload-803600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0    │ no-preload-803600 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │                     │
	│ ssh     │ -p kindnet-416400 sudo cat /etc/nsswitch.conf                                                                                           │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ ssh     │ -p kindnet-416400 sudo cat /etc/hosts                                                                                                   │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ ssh     │ -p kindnet-416400 sudo cat /etc/resolv.conf                                                                                             │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ ssh     │ -p kindnet-416400 sudo crictl pods                                                                                                      │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ ssh     │ -p kindnet-416400 sudo crictl ps --all                                                                                                  │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ ssh     │ -p kindnet-416400 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                           │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ ssh     │ -p kindnet-416400 sudo ip a s                                                                                                           │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ ssh     │ -p kindnet-416400 sudo ip r s                                                                                                           │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ ssh     │ -p kindnet-416400 sudo iptables-save                                                                                                    │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ ssh     │ -p kindnet-416400 sudo iptables -t nat -L -n -v                                                                                         │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ ssh     │ -p kindnet-416400 sudo systemctl status kubelet --all --full --no-pager                                                                 │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ ssh     │ -p kindnet-416400 sudo systemctl cat kubelet --no-pager                                                                                 │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:19:45
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:19:45.348646    8468 out.go:360] Setting OutFile to fd 1724 ...
	I1213 10:19:45.394569    8468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:19:45.394569    8468 out.go:374] Setting ErrFile to fd 1208...
	I1213 10:19:45.394972    8468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:19:45.408028    8468 out.go:368] Setting JSON to false
	I1213 10:19:45.411496    8468 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6992,"bootTime":1765614192,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 10:19:45.411652    8468 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 10:19:45.414897    8468 out.go:179] * [no-preload-803600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 10:19:45.417165    8468 notify.go:221] Checking for updates...
	I1213 10:19:45.419300    8468 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:19:45.421305    8468 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:19:45.423304    8468 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 10:19:45.425291    8468 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:19:45.428296    8468 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:19:45.430295    8468 config.go:182] Loaded profile config "no-preload-803600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:19:45.431306    8468 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:19:45.544259    8468 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 10:19:45.547412    8468 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:19:45.799291    8468 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:95 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:19:45.779035639 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:19:45.803297    8468 out.go:179] * Using the docker driver based on existing profile
	I1213 10:19:45.805294    8468 start.go:309] selected driver: docker
	I1213 10:19:45.805294    8468 start.go:927] validating driver "docker" against &{Name:no-preload-803600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-803600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:19:45.805294    8468 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:19:45.896305    8468 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:19:46.151757    8468 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:95 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:19:46.132203366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:19:46.151757    8468 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:19:46.152419    8468 cni.go:84] Creating CNI manager for ""
	I1213 10:19:46.152419    8468 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 10:19:46.152419    8468 start.go:353] cluster config:
	{Name:no-preload-803600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-803600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:19:46.156414    8468 out.go:179] * Starting "no-preload-803600" primary control-plane node in "no-preload-803600" cluster
	I1213 10:19:46.158422    8468 cache.go:134] Beginning downloading kic base image for docker with docker
	I1213 10:19:46.160414    8468 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:19:46.162414    8468 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:19:46.162414    8468 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 10:19:46.162414    8468 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\config.json ...
	I1213 10:19:46.163417    8468 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1213 10:19:46.163417    8468 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1213 10:19:46.163417    8468 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1213 10:19:46.163417    8468 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1213 10:19:46.163417    8468 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1213 10:19:46.163417    8468 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1213 10:19:46.163417    8468 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1213 10:19:46.163417    8468 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1213 10:19:46.360109    8468 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:19:46.360109    8468 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:19:46.360109    8468 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:19:46.360109    8468 start.go:360] acquireMachinesLock for no-preload-803600: {Name:mkcf862c61e4405506d111940ccf3455664885da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:19:46.360109    8468 start.go:364] duration metric: took 0s to acquireMachinesLock for "no-preload-803600"
	I1213 10:19:46.360109    8468 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:19:46.360109    8468 fix.go:54] fixHost starting: 
	I1213 10:19:46.378111    8468 cli_runner.go:164] Run: docker container inspect no-preload-803600 --format={{.State.Status}}
	I1213 10:19:46.528215    8468 fix.go:112] recreateIfNeeded on no-preload-803600: state=Stopped err=<nil>
	W1213 10:19:46.528215    8468 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:19:46.540200    8468 out.go:252] * Restarting existing docker container for "no-preload-803600" ...
	I1213 10:19:46.544189    8468 cli_runner.go:164] Run: docker start no-preload-803600
	I1213 10:19:47.909943    8468 cli_runner.go:217] Completed: docker start no-preload-803600: (1.3657343s)
	I1213 10:19:47.920065    8468 cli_runner.go:164] Run: docker container inspect no-preload-803600 --format={{.State.Status}}
	I1213 10:19:48.200892    8468 kic.go:430] container "no-preload-803600" state is running.
	I1213 10:19:48.208912    8468 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-803600
	I1213 10:19:48.341576    8468 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\config.json ...
	I1213 10:19:48.344507    8468 machine.go:94] provisionDockerMachine start ...
	I1213 10:19:48.352044    8468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:19:48.458965    8468 main.go:143] libmachine: Using SSH client type: native
	I1213 10:19:48.459954    8468 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53489 <nil> <nil>}
	I1213 10:19:48.459954    8468 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:19:48.472941    8468 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 10:19:49.536065    8468 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:19:49.536065    8468 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1213 10:19:49.536065    8468 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 3.3725997s
	I1213 10:19:49.536065    8468 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1213 10:19:49.580065    8468 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:19:49.580065    8468 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1213 10:19:49.580065    8468 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.416599s
	I1213 10:19:49.580065    8468 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1213 10:19:49.591058    8468 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:19:49.592055    8468 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1213 10:19:49.592055    8468 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 3.4285888s
	I1213 10:19:49.592055    8468 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1213 10:19:49.592055    8468 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:19:49.592055    8468 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1213 10:19:49.592055    8468 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 3.4285888s
	I1213 10:19:49.592055    8468 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1213 10:19:49.593059    8468 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:19:49.593059    8468 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1213 10:19:49.594068    8468 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 3.4306014s
	I1213 10:19:49.594068    8468 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1213 10:19:49.597052    8468 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:19:49.597052    8468 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1213 10:19:49.597052    8468 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 3.4335857s
	I1213 10:19:49.597052    8468 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1213 10:19:49.642066    8468 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:19:49.642066    8468 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1213 10:19:49.642066    8468 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 3.4785984s
	I1213 10:19:49.643055    8468 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1213 10:19:49.655068    8468 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:19:49.655068    8468 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1213 10:19:49.656079    8468 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 3.4926116s
	I1213 10:19:49.656079    8468 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1213 10:19:49.656079    8468 cache.go:87] Successfully saved all images to host disk.
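
The interleaved cache.go lines are a second goroutine verifying the no-preload image cache: each image takes a per-image lock, the on-disk tarball under .minikube\cache\images\amd64 is checked for existence, and the elapsed time is logged; every image is already present, so nothing is pulled. A rough shell equivalent of that existence check, with POSIX paths assumed in place of the Windows ones in the log:

    CACHE="$HOME/.minikube/cache/images/amd64"   # Windows path in the log; POSIX form assumed here
    for img in registry.k8s.io/kube-apiserver_v1.35.0-beta.0 \
               registry.k8s.io/pause_3.10.1 \
               registry.k8s.io/coredns/coredns_v1.13.1 \
               gcr.io/k8s-minikube/storage-provisioner_v5; do
      # A cached image is a tar file named after the image and tag
      [ -f "$CACHE/$img" ] && echo "cached: $img" || echo "missing: $img"
    done
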
	I1213 10:19:51.655610    8468 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-803600
	
	I1213 10:19:51.655610    8468 ubuntu.go:182] provisioning hostname "no-preload-803600"
	I1213 10:19:51.659825    8468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:19:51.725462    8468 main.go:143] libmachine: Using SSH client type: native
	I1213 10:19:51.726463    8468 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53489 <nil> <nil>}
	I1213 10:19:51.726463    8468 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-803600 && echo "no-preload-803600" | sudo tee /etc/hostname
	I1213 10:19:51.930842    8468 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-803600
	
	I1213 10:19:51.935838    8468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:19:51.999055    8468 main.go:143] libmachine: Using SSH client type: native
	I1213 10:19:51.999055    8468 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53489 <nil> <nil>}
	I1213 10:19:51.999055    8468 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-803600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-803600/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-803600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:19:52.193801    8468 main.go:143] libmachine: SSH cmd err, output: <nil>: 
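
The /etc/hosts script above is deliberately idempotent: it does nothing if any line already ends in the hostname, rewrites an existing 127.0.1.1 entry in place if there is one, and only otherwise appends a new entry. The same logic, unchanged, with comments:

    # Skip entirely if some entry already maps to this hostname
    if ! grep -xq '.*\sno-preload-803600' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        # Rewrite the existing 127.0.1.1 line rather than adding a second one
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-803600/g' /etc/hosts
      else
        # No 127.0.1.1 line yet: append one
        echo '127.0.1.1 no-preload-803600' | sudo tee -a /etc/hosts
      fi
    fi
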
	I1213 10:19:52.193801    8468 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1213 10:19:52.193801    8468 ubuntu.go:190] setting up certificates
	I1213 10:19:52.193801    8468 provision.go:84] configureAuth start
	I1213 10:19:52.198089    8468 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-803600
	I1213 10:19:52.260779    8468 provision.go:143] copyHostCerts
	I1213 10:19:52.261366    8468 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1213 10:19:52.261366    8468 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1213 10:19:52.261366    8468 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1213 10:19:52.262748    8468 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1213 10:19:52.262777    8468 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1213 10:19:52.262997    8468 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1213 10:19:52.264143    8468 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1213 10:19:52.264193    8468 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1213 10:19:52.264534    8468 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1213 10:19:52.265279    8468 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.no-preload-803600 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-803600]
	I1213 10:19:52.298901    8468 provision.go:177] copyRemoteCerts
	I1213 10:19:52.302899    8468 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:19:52.305898    8468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:19:52.362435    8468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53489 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-803600\id_rsa Username:docker}
	I1213 10:19:52.510179    8468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:19:52.543440    8468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:19:52.578143    8468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:19:52.617233    8468 provision.go:87] duration metric: took 423.4266ms to configureAuth
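
configureAuth refreshes the client certs under .minikube, regenerates the server certificate with SANs for every address the daemon may be reached on (127.0.0.1, the container IP 192.168.103.2, localhost, minikube, and the hostname), and scp's the CA, server cert, and server key into /etc/docker on the node. One way to confirm the SANs landed, assuming a reasonably recent openssl on the node (this check is not part of the log):

    # Print subject and SANs of the server certificate that was just copied in
    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName
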
	I1213 10:19:52.617785    8468 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:19:52.618358    8468 config.go:182] Loaded profile config "no-preload-803600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:19:52.623872    8468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:19:52.686925    8468 main.go:143] libmachine: Using SSH client type: native
	I1213 10:19:52.687445    8468 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53489 <nil> <nil>}
	I1213 10:19:52.687480    8468 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 10:19:52.872375    8468 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1213 10:19:52.872375    8468 ubuntu.go:71] root file system type: overlay
	I1213 10:19:52.872928    8468 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 10:19:52.876824    8468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:19:52.937002    8468 main.go:143] libmachine: Using SSH client type: native
	I1213 10:19:52.937976    8468 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53489 <nil> <nil>}
	I1213 10:19:52.938076    8468 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 10:19:53.158443    8468 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 10:19:53.163633    8468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:19:53.227440    8468 main.go:143] libmachine: Using SSH client type: native
	I1213 10:19:53.228437    8468 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53489 <nil> <nil>}
	I1213 10:19:53.228437    8468 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 10:19:53.418226    8468 main.go:143] libmachine: SSH cmd err, output: <nil>: 
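
The unit swap above is an update-if-changed pattern: the rendered unit goes to docker.service.new, and only if diff reports a difference is it moved into place and followed by daemon-reload, enable, and restart; the empty SSH output here means the unit was unchanged and Docker was left running. The same pattern written out as a plain conditional:

    # Install the new unit only if it differs from the current one
    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
    fi
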
	I1213 10:19:53.418226    8468 machine.go:97] duration metric: took 5.0736468s to provisionDockerMachine
	I1213 10:19:53.418226    8468 start.go:293] postStartSetup for "no-preload-803600" (driver="docker")
	I1213 10:19:53.418226    8468 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:19:53.423070    8468 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:19:53.425911    8468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:19:53.481934    8468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53489 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-803600\id_rsa Username:docker}
	I1213 10:19:53.613891    8468 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:19:53.621539    8468 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:19:53.621539    8468 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:19:53.621539    8468 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1213 10:19:53.622540    8468 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1213 10:19:53.622540    8468 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> 29682.pem in /etc/ssl/certs
	I1213 10:19:53.627908    8468 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 10:19:53.644674    8468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /etc/ssl/certs/29682.pem (1708 bytes)
	I1213 10:19:53.683710    8468 start.go:296] duration metric: took 265.4807ms for postStartSetup
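
postStartSetup creates the node's standard directory tree and then mirrors local assets: anything under .minikube\addons or .minikube\files is copied to the node at the same relative path, which is how the test certificate 29682.pem ends up in /etc/ssl/certs. A sketch of that mirroring, assuming the staging tree is readable where the loop runs (minikube actually transfers each file over SSH):

    SRC="$HOME/.minikube/files"            # host-side staging tree (POSIX form assumed)
    find "$SRC" -type f | while read -r f; do
      rel="${f#$SRC}"                      # e.g. /etc/ssl/certs/29682.pem
      sudo mkdir -p "$(dirname "$rel")"    # recreate the directory on the node
      sudo cp "$f" "$rel"                  # minikube scp's this over SSH instead
    done
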
	I1213 10:19:53.688461    8468 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:19:53.692588    8468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:19:53.748028    8468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53489 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-803600\id_rsa Username:docker}
	I1213 10:19:53.885106    8468 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:19:53.894800    8468 fix.go:56] duration metric: took 7.5345823s for fixHost
	I1213 10:19:53.894800    8468 start.go:83] releasing machines lock for "no-preload-803600", held for 7.5345823s
	I1213 10:19:53.899723    8468 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-803600
	I1213 10:19:53.963324    8468 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1213 10:19:53.968697    8468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:19:53.969613    8468 ssh_runner.go:195] Run: cat /version.json
	I1213 10:19:53.975962    8468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:19:54.036036    8468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53489 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-803600\id_rsa Username:docker}
	I1213 10:19:54.036036    8468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53489 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-803600\id_rsa Username:docker}
	W1213 10:19:54.161183    8468 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1213 10:19:54.188871    8468 ssh_runner.go:195] Run: systemctl --version
	I1213 10:19:54.207422    8468 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:19:54.219936    8468 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:19:54.224766    8468 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:19:54.239474    8468 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:19:54.239474    8468 start.go:496] detecting cgroup driver to use...
	I1213 10:19:54.239474    8468 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:19:54.239474    8468 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1213 10:19:54.265468    8468 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1213 10:19:54.265468    8468 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
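
The registry probe fails for an incidental reason rather than a network one: the runner passes the host binary name curl.exe straight through to the Linux node, bash exits 127 with "command not found", and minikube then emits the proxy warning seen here. Re-running the probe with the Linux binary name shows actual reachability, assuming curl is present in the node image:

    # Same 2-second probe the log attempted, using the binary name that exists inside the node
    docker exec no-preload-803600 curl -sS -m 2 https://registry.k8s.io/
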
	I1213 10:19:54.265468    8468 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:19:54.291479    8468 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:19:54.310876    8468 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:19:54.315529    8468 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:19:54.336055    8468 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:19:54.361908    8468 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:19:54.388301    8468 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:19:54.409976    8468 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:19:54.431219    8468 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:19:54.453984    8468 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:19:54.475694    8468 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 10:19:54.494481    8468 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:19:54.510809    8468 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:19:54.529069    8468 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:19:54.685256    8468 ssh_runner.go:195] Run: sudo systemctl restart containerd
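
Even though this profile runs the docker runtime, /etc/containerd/config.toml is normalized first: the pause image is pinned, SystemdCgroup is forced to false to match the detected cgroupfs driver, legacy runtime names are rewritten to io.containerd.runc.v2, conf_dir is pointed at /etc/cni/net.d, and containerd is restarted. The two edits that actually affect kubelet compatibility, as run above:

    # Pin the sandbox (pause) image to the version kubeadm expects
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
    # Host cgroup driver was detected as cgroupfs, so disable systemd cgroups in the runc runtime
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo systemctl restart containerd
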
	I1213 10:19:54.852460    8468 start.go:496] detecting cgroup driver to use...
	I1213 10:19:54.852460    8468 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:19:54.859898    8468 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 10:19:54.893855    8468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:19:54.918575    8468 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:19:54.992684    8468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:19:55.022485    8468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:19:55.041497    8468 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:19:55.072168    8468 ssh_runner.go:195] Run: which cri-dockerd
	I1213 10:19:55.086598    8468 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 10:19:55.099071    8468 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1213 10:19:55.125793    8468 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 10:19:55.290490    8468 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
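
containerd is then stopped in favor of Docker: crictl.yaml is rewritten to point at the cri-dockerd socket, a 10-cni.conf drop-in is staged for cri-docker.service, and docker.service/docker.socket are unmasked and enabled. Verifying which endpoint crictl will use is a quick check (the crictl call is an assumption, not from the log):

    # crictl.yaml should now name the cri-dockerd socket
    cat /etc/crictl.yaml                      # runtime-endpoint: unix:///var/run/cri-dockerd.sock
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock info | head
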
	I1213 10:19:55.944070    8076 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:19:55.944070    8076 kubeadm.go:319] 
	I1213 10:19:55.944070    8076 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 10:19:55.947579    8076 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:19:55.947579    8076 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:19:55.947579    8076 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:19:55.947579    8076 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1213 10:19:55.947579    8076 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1213 10:19:55.948596    8076 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1213 10:19:55.948596    8076 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1213 10:19:55.948596    8076 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1213 10:19:55.948596    8076 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1213 10:19:55.948596    8076 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1213 10:19:55.948596    8076 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1213 10:19:55.948596    8076 kubeadm.go:319] CONFIG_INET: enabled
	I1213 10:19:55.948596    8076 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1213 10:19:55.948596    8076 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1213 10:19:55.948596    8076 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1213 10:19:55.949578    8076 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1213 10:19:55.949578    8076 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1213 10:19:55.949578    8076 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1213 10:19:55.949578    8076 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1213 10:19:55.949578    8076 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1213 10:19:55.949578    8076 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1213 10:19:55.949578    8076 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1213 10:19:55.949578    8076 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1213 10:19:55.949578    8076 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1213 10:19:55.950584    8076 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1213 10:19:55.950584    8076 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1213 10:19:55.950584    8076 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1213 10:19:55.950584    8076 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1213 10:19:55.950584    8076 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1213 10:19:55.950584    8076 kubeadm.go:319] OS: Linux
	I1213 10:19:55.950584    8076 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:19:55.950584    8076 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:19:55.950584    8076 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:19:55.950584    8076 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:19:55.950584    8076 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:19:55.951583    8076 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:19:55.951583    8076 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:19:55.951583    8076 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:19:55.951583    8076 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:19:55.951583    8076 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:19:55.951583    8076 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:19:55.951583    8076 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:19:55.952583    8076 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:19:55.958587    8076 out.go:252]   - Generating certificates and keys ...
	I1213 10:19:55.958587    8076 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:19:55.958587    8076 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:19:55.958587    8076 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:19:55.958587    8076 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:19:55.958587    8076 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:19:55.958587    8076 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:19:55.958587    8076 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:19:55.959578    8076 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:19:55.959578    8076 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:19:55.959578    8076 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:19:55.959578    8076 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:19:55.959578    8076 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:19:55.959578    8076 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:19:55.959578    8076 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:19:55.959578    8076 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:19:55.960576    8076 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:19:55.960576    8076 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:19:55.960576    8076 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:19:55.960576    8076 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:19:55.971572    8076 out.go:252]   - Booting up control plane ...
	I1213 10:19:55.971572    8076 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:19:55.972579    8076 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:19:55.972579    8076 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:19:55.972579    8076 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:19:55.972579    8076 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:19:55.973584    8076 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:19:55.973584    8076 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:19:55.973584    8076 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:19:55.973584    8076 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:19:55.974581    8076 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:19:55.974581    8076 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000719997s
	I1213 10:19:55.974581    8076 kubeadm.go:319] 
	I1213 10:19:55.974581    8076 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:19:55.974581    8076 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:19:55.974581    8076 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:19:55.974581    8076 kubeadm.go:319] 
	I1213 10:19:55.974581    8076 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:19:55.975583    8076 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:19:55.975583    8076 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:19:55.975583    8076 kubeadm.go:319] 
	I1213 10:19:55.975583    8076 kubeadm.go:403] duration metric: took 8m4.5165476s to StartCluster
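
The kubeadm.go:319 lines above come from a second minikube process (pid 8076) running in parallel, whose kubeadm init has already failed: the kubelet never answered GET http://127.0.0.1:10248/healthz within the 4m0s window, so StartCluster aborts after 8m4s. The log's own troubleshooting suggestions, plus a cgroup probe relevant to the v1-deprecation warning in the output below (the stat command is an assumption):

    # Suggested by kubeadm in the output above
    systemctl status kubelet
    journalctl -xeu kubelet
    # cgroup2fs here means cgroup v2; tmpfs means the deprecated cgroup v1 hierarchy
    stat -fc %T /sys/fs/cgroup
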
	I1213 10:19:55.975583    8076 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:19:55.981580    8076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:19:56.047600    8076 cri.go:89] found id: ""
	I1213 10:19:56.047600    8076 logs.go:282] 0 containers: []
	W1213 10:19:56.047600    8076 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:19:56.047600    8076 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:19:56.052579    8076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:19:56.096883    8076 cri.go:89] found id: ""
	I1213 10:19:56.096883    8076 logs.go:282] 0 containers: []
	W1213 10:19:56.096883    8076 logs.go:284] No container was found matching "etcd"
	I1213 10:19:56.096883    8076 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:19:56.101186    8076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:19:56.155064    8076 cri.go:89] found id: ""
	I1213 10:19:56.155064    8076 logs.go:282] 0 containers: []
	W1213 10:19:56.155064    8076 logs.go:284] No container was found matching "coredns"
	I1213 10:19:56.155064    8076 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:19:56.160080    8076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:19:56.230080    8076 cri.go:89] found id: ""
	I1213 10:19:56.230080    8076 logs.go:282] 0 containers: []
	W1213 10:19:56.230080    8076 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:19:56.230080    8076 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:19:56.235066    8076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:19:56.280066    8076 cri.go:89] found id: ""
	I1213 10:19:56.280066    8076 logs.go:282] 0 containers: []
	W1213 10:19:56.280066    8076 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:19:56.280066    8076 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:19:56.284066    8076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:19:56.353072    8076 cri.go:89] found id: ""
	I1213 10:19:56.353072    8076 logs.go:282] 0 containers: []
	W1213 10:19:56.353072    8076 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:19:56.353072    8076 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:19:56.357073    8076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:19:56.406066    8076 cri.go:89] found id: ""
	I1213 10:19:56.406066    8076 logs.go:282] 0 containers: []
	W1213 10:19:56.406066    8076 logs.go:284] No container was found matching "kindnet"
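
With the control plane never having come up, minikube probes each expected component by container name and finds none; the seven near-identical calls above collapse to one loop (the crictl invocation is verbatim, the loop is illustrative):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "$name: ${ids:-<none>}"
    done
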
	I1213 10:19:56.406066    8076 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:19:56.406066    8076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:19:56.496118    8076 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:19:56.487694   10265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:19:56.489010   10265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:19:56.489995   10265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:19:56.491141   10265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:19:56.492015   10265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: [** stderr ** block identical to the five "connection refused" errors above]
	I1213 10:19:56.496118    8076 logs.go:123] Gathering logs for Docker ...
	I1213 10:19:56.496118    8076 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:19:56.525110    8076 logs.go:123] Gathering logs for container status ...
	I1213 10:19:56.525110    8076 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:19:56.578958    8076 logs.go:123] Gathering logs for kubelet ...
	I1213 10:19:56.579489    8076 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:19:56.647196    8076 logs.go:123] Gathering logs for dmesg ...
	I1213 10:19:56.647196    8076 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
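
Since kubectl cannot reach the absent apiserver, the remaining diagnostics come from sources that need no API: journald for docker/cri-docker and the kubelet, crictl or docker for container status, and the kernel ring buffer. The same collection in one pass, as run above:

    sudo journalctl -u docker -u cri-docker -n 400     # container runtime logs
    sudo journalctl -u kubelet -n 400                  # kubelet logs
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
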
	W1213 10:19:56.684209    8076 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000719997s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
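
The stderr warnings point at the probable root cause on this 5.15 WSL2 kernel: the node is running cgroups v1, which kubelet v1.35 refuses unless explicitly permitted. The warning's remedy maps to a single KubeletConfiguration field; a minimal sketch, assuming failCgroupV1 is accepted at the top level of /var/lib/kubelet/config.yaml and noting that kubeadm rewrites that file on every init:

    # Hypothetical manual remedy per the warning above; kubeadm regenerates this file on init
    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet
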
	W1213 10:19:56.684209    8076 out.go:285] * 
	W1213 10:19:56.684209    8076 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr identical to the kubeadm init failure output above]
	
	W1213 10:19:56.685204    8076 out.go:285] * 
	W1213 10:19:56.686199    8076 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:19:56.695194    8076 out.go:203] 
	W1213 10:19:56.697194    8076 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr identical to the kubeadm init failure output above]
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:19:56.698197    8076 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 10:19:56.698197    8076 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 10:19:56.700209    8076 out.go:203] 
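	The K8S_KUBELET_NOT_RUNNING exit above reduces to a cgroup mismatch, and the log carries its own two hints: kubeadm's SystemVerification warning names the kubelet configuration option 'FailCgroupV1', and minikube's suggestion line proposes retrying with an explicit cgroup driver. A minimal retry sketch following that suggestion, with the profile name and driver copied from this run (whether the flag actually clears the v1.35.0-beta.0 validation is untested here):
	
		out/minikube-windows-amd64.exe delete -p newest-cni-307000
		out/minikube-windows-amd64.exe start -p newest-cni-307000 --driver=docker --extra-config=kubelet.cgroup-driver=systemd
		# if it still fails, capture logs for the issue template printed above
		out/minikube-windows-amd64.exe logs --file=logs.txt -p newest-cni-307000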
	I1213 10:19:55.470123    8468 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 10:19:55.470123    8468 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 10:19:55.495115    8468 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1213 10:19:55.516116    8468 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:19:55.678804    8468 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 10:19:56.651196    8468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:19:56.673194    8468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 10:19:56.696194    8468 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1213 10:19:56.720195    8468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 10:19:56.742198    8468 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 10:19:56.894197    8468 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 10:19:57.071576    8468 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:19:57.242586    8468 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 10:19:57.270585    8468 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1213 10:19:57.293575    8468 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:19:57.460207    8468 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 10:19:57.582812    8468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 10:19:57.601837    8468 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 10:19:57.606825    8468 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 10:19:57.614835    8468 start.go:564] Will wait 60s for crictl version
	I1213 10:19:57.618835    8468 ssh_runner.go:195] Run: which crictl
	I1213 10:19:57.629815    8468 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:19:57.676812    8468 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1213 10:19:57.679825    8468 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 10:19:57.727828    8468 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
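	The 130-byte daemon.json that docker.go:575 writes to switch the node's Docker to the "cgroupfs" driver is not captured in the log. It can be read back through minikube's ssh wrapper; the profile name below is assumed from the surrounding newest-cni dump, and the exec-opts key shown is Docker's standard daemon knob for the cgroup driver, not a quote from this run:
	
		out/minikube-windows-amd64.exe ssh -p newest-cni-307000 -- sudo cat /etc/docker/daemon.json
		# expected shape: {"exec-opts": ["native.cgroupdriver=cgroupfs"], ...}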
	
	
	==> Docker <==
	Dec 13 10:11:48 newest-cni-307000 dockerd[1196]: time="2025-12-13T10:11:48.116462064Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 13 10:11:48 newest-cni-307000 dockerd[1196]: time="2025-12-13T10:11:48.116551372Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 13 10:11:48 newest-cni-307000 dockerd[1196]: time="2025-12-13T10:11:48.116562473Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 13 10:11:48 newest-cni-307000 dockerd[1196]: time="2025-12-13T10:11:48.116569874Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 10:11:48 newest-cni-307000 dockerd[1196]: time="2025-12-13T10:11:48.116575674Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 13 10:11:48 newest-cni-307000 dockerd[1196]: time="2025-12-13T10:11:48.116598777Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 13 10:11:48 newest-cni-307000 dockerd[1196]: time="2025-12-13T10:11:48.116638580Z" level=info msg="Initializing buildkit"
	Dec 13 10:11:48 newest-cni-307000 dockerd[1196]: time="2025-12-13T10:11:48.245496763Z" level=info msg="Completed buildkit initialization"
	Dec 13 10:11:48 newest-cni-307000 dockerd[1196]: time="2025-12-13T10:11:48.260353344Z" level=info msg="Daemon has completed initialization"
	Dec 13 10:11:48 newest-cni-307000 dockerd[1196]: time="2025-12-13T10:11:48.260701677Z" level=info msg="API listen on [::]:2376"
	Dec 13 10:11:48 newest-cni-307000 dockerd[1196]: time="2025-12-13T10:11:48.260766383Z" level=info msg="API listen on /run/docker.sock"
	Dec 13 10:11:48 newest-cni-307000 dockerd[1196]: time="2025-12-13T10:11:48.260786285Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 10:11:48 newest-cni-307000 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 13 10:11:49 newest-cni-307000 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 10:11:49 newest-cni-307000 cri-dockerd[1489]: time="2025-12-13T10:11:49Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 13 10:11:49 newest-cni-307000 cri-dockerd[1489]: time="2025-12-13T10:11:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 13 10:11:49 newest-cni-307000 cri-dockerd[1489]: time="2025-12-13T10:11:49Z" level=info msg="Start docker client with request timeout 0s"
	Dec 13 10:11:49 newest-cni-307000 cri-dockerd[1489]: time="2025-12-13T10:11:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 13 10:11:49 newest-cni-307000 cri-dockerd[1489]: time="2025-12-13T10:11:49Z" level=info msg="Loaded network plugin cni"
	Dec 13 10:11:49 newest-cni-307000 cri-dockerd[1489]: time="2025-12-13T10:11:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 13 10:11:49 newest-cni-307000 cri-dockerd[1489]: time="2025-12-13T10:11:49Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 13 10:11:49 newest-cni-307000 cri-dockerd[1489]: time="2025-12-13T10:11:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 13 10:11:49 newest-cni-307000 cri-dockerd[1489]: time="2025-12-13T10:11:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 13 10:11:49 newest-cni-307000 cri-dockerd[1489]: time="2025-12-13T10:11:49Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 13 10:11:49 newest-cni-307000 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:19:58.794769   10445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:19:58.795961   10445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:19:58.796889   10445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:19:58.799226   10445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:19:58.800217   10445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +5.822306] CPU: 8 PID: 417127 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f309eb02b20
	[  +0.000006] Code: Unable to access opcode bytes at RIP 0x7f309eb02af6.
	[  +0.000001] RSP: 002b:00007ffeada4c8d0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.951842] CPU: 14 PID: 417302 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7fecfe6b9b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7fecfe6b9af6.
	[  +0.000001] RSP: 002b:00007ffdf6aeb3d0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 10:19:58 up  1:56,  0 user,  load average: 3.18, 3.15, 3.25
	Linux newest-cni-307000 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:19:55 newest-cni-307000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:19:56 newest-cni-307000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 13 10:19:56 newest-cni-307000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:19:56 newest-cni-307000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:19:56 newest-cni-307000 kubelet[10200]: E1213 10:19:56.230442   10200 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:19:56 newest-cni-307000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:19:56 newest-cni-307000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:19:56 newest-cni-307000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 13 10:19:56 newest-cni-307000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:19:56 newest-cni-307000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:19:56 newest-cni-307000 kubelet[10298]: E1213 10:19:56.956429   10298 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:19:56 newest-cni-307000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:19:56 newest-cni-307000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:19:57 newest-cni-307000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 13 10:19:57 newest-cni-307000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:19:57 newest-cni-307000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:19:57 newest-cni-307000 kubelet[10318]: E1213 10:19:57.706990   10318 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:19:57 newest-cni-307000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:19:57 newest-cni-307000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:19:58 newest-cni-307000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 13 10:19:58 newest-cni-307000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:19:58 newest-cni-307000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:19:58 newest-cni-307000 kubelet[10351]: E1213 10:19:58.456818   10351 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:19:58 newest-cni-307000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:19:58 newest-cni-307000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-307000 -n newest-cni-307000
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-307000 -n newest-cni-307000: exit status 6 (599.456ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1213 10:19:59.643324   14076 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-307000" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-307000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (516.44s)
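The kubelet section above is the proximate cause of this 516s FirstStart failure: on this cgroup v1 WSL2 host, the v1.35.0-beta.0 kubelet fails its own configuration validation on every systemd restart (the counter passes 320 attempts), even though the kubeadm command line already ignores the SystemVerification preflight. Per the preflight warning, the opt-out is the 'FailCgroupV1' kubelet configuration option; a sketch for checking whether the generated kubelet config carries it (lowerCamel YAML spelling is an assumption):

	out/minikube-windows-amd64.exe ssh -p newest-cni-307000 -- sudo grep -in cgroup /var/lib/kubelet/config.yaml
	# per the warning, cgroup v1 hosts would need the override:
	#   failCgroupV1: false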

x
+
TestStartStop/group/no-preload/serial/DeployApp (6.76s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-803600 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context no-preload-803600 create -f testdata\busybox.yaml: exit status 1 (91.6928ms)

** stderr ** 
	error: context "no-preload-803600" does not exist

** /stderr **
start_stop_delete_test.go:194: kubectl --context no-preload-803600 create -f testdata\busybox.yaml failed: exit status 1
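The create fails before touching the cluster at all: the profile's context was never merged into the kubeconfig, as the status stderr further down confirms ("no-preload-803600" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig). Hypothetical follow-ups, per the WARNING that minikube status prints, assuming the apiserver were reachable:

	kubectl config get-contexts
	out/minikube-windows-amd64.exe update-context -p no-preload-803600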
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-803600
helpers_test.go:244: (dbg) docker inspect no-preload-803600:

-- stdout --
	[
	    {
	        "Id": "3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd",
	        "Created": "2025-12-13T10:09:24.921242732Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 327940,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:09:25.240761048Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd/hostname",
	        "HostsPath": "/var/lib/docker/containers/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd/hosts",
	        "LogPath": "/var/lib/docker/containers/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd-json.log",
	        "Name": "/no-preload-803600",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-803600:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-803600",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/571041f9092b0534048a0b1dac35e9d4a08a2ff2442796fa15a0636437fe7f5e-init/diff:/var/lib/docker/overlay2/429aa299c6fcdb1695d08ec7c893c57c033afffcd3ec41fc904bf3236db5abde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/571041f9092b0534048a0b1dac35e9d4a08a2ff2442796fa15a0636437fe7f5e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/571041f9092b0534048a0b1dac35e9d4a08a2ff2442796fa15a0636437fe7f5e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/571041f9092b0534048a0b1dac35e9d4a08a2ff2442796fa15a0636437fe7f5e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-803600",
	                "Source": "/var/lib/docker/volumes/no-preload-803600/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-803600",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-803600",
	                "name.minikube.sigs.k8s.io": "no-preload-803600",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a1ed7a1408fdd16408942ad2920ffd10571f40dc038c29f6667e5ed69ec2ea92",
	            "SandboxKey": "/var/run/docker/netns/a1ed7a1408fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52686"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52682"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52683"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52684"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52685"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-803600": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ad4e73e428abf58593ff96b4628f21032a7a4afd7c1c0bb8be8d55b4e2d320fc",
	                    "EndpointID": "f89c7b01b868d720f5fc06986024a266fce8726dc2b3c53a5ec6b002f8b5ec56",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-803600",
	                        "3960d9897f63"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
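One detail worth reading out of the inspect dump: every PortBindings entry requests HostIp 127.0.0.1 with HostPort "0", i.e. an ephemeral port, and NetworkSettings.Ports records what Docker actually assigned (8443 maps to 127.0.0.1:52685 here). The same mapping can be queried directly; the container name and port are taken verbatim from the dump:

	docker port no-preload-803600 8443
	# 127.0.0.1:52685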
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-803600 -n no-preload-803600
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-803600 -n no-preload-803600: exit status 6 (563.5047ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1213 10:18:08.367859   14064 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-803600" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-803600 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-803600 logs -n 25: (1.0913178s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                      │    PROFILE     │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-416400 sudo systemctl status kubelet --all --full --no-pager                                           │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl cat kubelet --no-pager                                                           │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo journalctl -xeu kubelet --all --full --no-pager                                            │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cat /etc/kubernetes/kubelet.conf                                                           │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cat /var/lib/kubelet/config.yaml                                                           │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl status docker --all --full --no-pager                                            │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl cat docker --no-pager                                                            │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cat /etc/docker/daemon.json                                                                │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo docker system info                                                                         │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl status cri-docker --all --full --no-pager                                        │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl cat cri-docker --no-pager                                                        │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                   │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cat /usr/lib/systemd/system/cri-docker.service                                             │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cri-dockerd --version                                                                      │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl status containerd --all --full --no-pager                                        │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl cat containerd --no-pager                                                        │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cat /lib/systemd/system/containerd.service                                                 │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cat /etc/containerd/config.toml                                                            │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo containerd config dump                                                                     │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl status crio --all --full --no-pager                                              │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │                     │
	│ ssh     │ -p auto-416400 sudo systemctl cat crio --no-pager                                                              │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                    │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo crio config                                                                                │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ delete  │ -p auto-416400                                                                                                 │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ start   │ -p kindnet-416400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker │ kindnet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:18 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:18:00
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:17:59.982121    6436 out.go:360] Setting OutFile to fd 1200 ...
	I1213 10:18:00.024750    6436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:18:00.024821    6436 out.go:374] Setting ErrFile to fd 1736...
	I1213 10:18:00.024821    6436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:18:00.039127    6436 out.go:368] Setting JSON to false
	I1213 10:18:00.042132    6436 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6887,"bootTime":1765614192,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 10:18:00.042132    6436 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 10:18:00.048133    6436 out.go:179] * [kindnet-416400] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 10:18:00.052119    6436 notify.go:221] Checking for updates...
	I1213 10:18:00.054248    6436 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:18:00.056421    6436 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:18:00.060745    6436 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 10:18:00.063186    6436 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:18:00.066370    6436 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:18:00.069771    6436 config.go:182] Loaded profile config "kubernetes-upgrade-481200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:18:00.069864    6436 config.go:182] Loaded profile config "newest-cni-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:18:00.069864    6436 config.go:182] Loaded profile config "no-preload-803600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:18:00.070450    6436 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:18:00.192644    6436 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 10:18:00.198649    6436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:18:00.421515    6436 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:18:00.403252148 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:18:00.425087    6436 out.go:179] * Using the docker driver based on user configuration
	I1213 10:18:00.426922    6436 start.go:309] selected driver: docker
	I1213 10:18:00.427003    6436 start.go:927] validating driver "docker" against <nil>
	I1213 10:18:00.427099    6436 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:18:00.513356    6436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:18:00.742258    6436 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:18:00.726260812 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:18:00.743264    6436 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:18:00.743264    6436 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:18:00.746255    6436 out.go:179] * Using Docker Desktop driver with root privileges
	I1213 10:18:00.748273    6436 cni.go:84] Creating CNI manager for "kindnet"
	I1213 10:18:00.748273    6436 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 10:18:00.748273    6436 start.go:353] cluster config:
	{Name:kindnet-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:18:00.750286    6436 out.go:179] * Starting "kindnet-416400" primary control-plane node in "kindnet-416400" cluster
	I1213 10:18:00.754272    6436 cache.go:134] Beginning downloading kic base image for docker with docker
	I1213 10:18:00.757272    6436 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:18:00.759259    6436 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:18:00.759259    6436 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:18:00.760267    6436 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1213 10:18:00.760267    6436 cache.go:65] Caching tarball of preloaded images
	I1213 10:18:00.760267    6436 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 10:18:00.760267    6436 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1213 10:18:00.760267    6436 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\config.json ...
	I1213 10:18:00.760267    6436 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\config.json: {Name:mkb57822615d533cf4e4f00f9118393a9934e233 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:18:00.831962    6436 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:18:00.832560    6436 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:18:00.832612    6436 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:18:00.832612    6436 start.go:360] acquireMachinesLock for kindnet-416400: {Name:mk1cbf47b4d1a255d1032f17aad230077b5c0db7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:18:00.832612    6436 start.go:364] duration metric: took 0s to acquireMachinesLock for "kindnet-416400"
	I1213 10:18:00.832612    6436 start.go:93] Provisioning new machine with config: &{Name:kindnet-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 10:18:00.832612    6436 start.go:125] createHost starting for "" (driver="docker")
	I1213 10:18:04.183403    2828 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:18:04.183403    2828 kubeadm.go:319] 
	I1213 10:18:04.184173    2828 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 10:18:04.186667    2828 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:18:04.186667    2828 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:18:04.186667    2828 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:18:04.187620    2828 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1213 10:18:04.187620    2828 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1213 10:18:04.187620    2828 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1213 10:18:04.187620    2828 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1213 10:18:04.187620    2828 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1213 10:18:04.188149    2828 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1213 10:18:04.188862    2828 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1213 10:18:04.188980    2828 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1213 10:18:04.188980    2828 kubeadm.go:319] CONFIG_INET: enabled
	I1213 10:18:04.188980    2828 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1213 10:18:04.188980    2828 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1213 10:18:04.188980    2828 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1213 10:18:04.189584    2828 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1213 10:18:04.189688    2828 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1213 10:18:04.189688    2828 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1213 10:18:04.189688    2828 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1213 10:18:04.189688    2828 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1213 10:18:04.189688    2828 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1213 10:18:04.189688    2828 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1213 10:18:04.190208    2828 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1213 10:18:04.190389    2828 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1213 10:18:04.190389    2828 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1213 10:18:04.190389    2828 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1213 10:18:04.190389    2828 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1213 10:18:04.190389    2828 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1213 10:18:04.190984    2828 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1213 10:18:04.191111    2828 kubeadm.go:319] OS: Linux
	I1213 10:18:04.191174    2828 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:18:04.191286    2828 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:18:04.191402    2828 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:18:04.191585    2828 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:18:04.191585    2828 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:18:04.191585    2828 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:18:04.191585    2828 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:18:04.191585    2828 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:18:04.191585    2828 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:18:04.192202    2828 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:18:04.192303    2828 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:18:04.192464    2828 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:18:04.192542    2828 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:18:04.194798    2828 out.go:252]   - Generating certificates and keys ...
	I1213 10:18:04.194947    2828 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:18:04.194947    2828 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:18:04.194947    2828 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:18:04.194947    2828 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:18:04.196221    2828 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:18:04.196221    2828 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:18:04.196221    2828 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:18:04.196221    2828 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:18:04.196221    2828 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:18:04.196778    2828 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:18:04.196778    2828 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:18:04.196778    2828 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:18:04.196778    2828 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:18:04.196778    2828 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:18:04.197300    2828 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:18:04.197357    2828 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:18:04.197430    2828 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:18:04.197430    2828 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:18:04.197430    2828 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:18:04.201144    2828 out.go:252]   - Booting up control plane ...
	I1213 10:18:04.201307    2828 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:18:04.201307    2828 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:18:04.201307    2828 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:18:04.201307    2828 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:18:04.201899    2828 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:18:04.201899    2828 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:18:04.201899    2828 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:18:04.201899    2828 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:18:04.201899    2828 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:18:04.202862    2828 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:18:04.202862    2828 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001338168s
	I1213 10:18:04.202862    2828 kubeadm.go:319] 
	I1213 10:18:04.202862    2828 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:18:04.202862    2828 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:18:04.202862    2828 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:18:04.202862    2828 kubeadm.go:319] 
	I1213 10:18:04.203562    2828 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:18:04.203562    2828 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:18:04.203562    2828 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:18:04.203562    2828 kubeadm.go:319] 
	I1213 10:18:04.203562    2828 kubeadm.go:403] duration metric: took 8m4.1550359s to StartCluster
	I1213 10:18:04.203562    2828 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:18:04.207228    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:18:04.273383    2828 cri.go:89] found id: ""
	I1213 10:18:04.273383    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.273383    2828 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:18:04.273383    2828 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:18:04.277565    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:18:04.322297    2828 cri.go:89] found id: ""
	I1213 10:18:04.322297    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.322367    2828 logs.go:284] No container was found matching "etcd"
	I1213 10:18:04.322367    2828 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:18:04.326520    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:18:04.369083    2828 cri.go:89] found id: ""
	I1213 10:18:04.369140    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.369163    2828 logs.go:284] No container was found matching "coredns"
	I1213 10:18:04.369163    2828 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:18:04.373406    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:18:04.421351    2828 cri.go:89] found id: ""
	I1213 10:18:04.421351    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.421351    2828 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:18:04.421351    2828 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:18:04.425824    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:18:04.478322    2828 cri.go:89] found id: ""
	I1213 10:18:04.478322    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.478322    2828 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:18:04.478322    2828 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:18:04.484844    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:18:04.526345    2828 cri.go:89] found id: ""
	I1213 10:18:04.526345    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.526345    2828 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:18:04.526345    2828 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:18:04.530940    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:18:04.579137    2828 cri.go:89] found id: ""
	I1213 10:18:04.579137    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.579137    2828 logs.go:284] No container was found matching "kindnet"
	I1213 10:18:04.579137    2828 logs.go:123] Gathering logs for kubelet ...
	I1213 10:18:04.579137    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:18:04.640211    2828 logs.go:123] Gathering logs for dmesg ...
	I1213 10:18:04.640211    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:18:04.678021    2828 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:18:04.678021    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:18:04.767758    2828 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:18:04.755802   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.756711   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.759289   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.760591   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.761734   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:18:04.755802   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.756711   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.759289   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.760591   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.761734   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:18:04.767814    2828 logs.go:123] Gathering logs for Docker ...
	I1213 10:18:04.767846    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:18:04.804946    2828 logs.go:123] Gathering logs for container status ...
	I1213 10:18:04.804946    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:18:04.860957    2828 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001338168s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 10:18:04.860957    2828 out.go:285] * 
	W1213 10:18:04.861546    2828 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001338168s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:18:04.861737    2828 out.go:285] * 
	W1213 10:18:04.863650    2828 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:18:04.869031    2828 out.go:203] 
	W1213 10:18:04.871300    2828 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001338168s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:18:04.871300    2828 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 10:18:04.871300    2828 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 10:18:04.874442    2828 out.go:203] 
	I1213 10:18:00.836584    6436 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 10:18:00.837174    6436 start.go:159] libmachine.API.Create for "kindnet-416400" (driver="docker")
	I1213 10:18:00.837174    6436 client.go:173] LocalClient.Create starting
	I1213 10:18:00.837789    6436 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1213 10:18:00.837789    6436 main.go:143] libmachine: Decoding PEM data...
	I1213 10:18:00.837789    6436 main.go:143] libmachine: Parsing certificate...
	I1213 10:18:00.837789    6436 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1213 10:18:00.837789    6436 main.go:143] libmachine: Decoding PEM data...
	I1213 10:18:00.838309    6436 main.go:143] libmachine: Parsing certificate...
	I1213 10:18:00.842081    6436 cli_runner.go:164] Run: docker network inspect kindnet-416400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 10:18:00.894842    6436 cli_runner.go:211] docker network inspect kindnet-416400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 10:18:00.899134    6436 network_create.go:284] running [docker network inspect kindnet-416400] to gather additional debugging logs...
	I1213 10:18:00.899219    6436 cli_runner.go:164] Run: docker network inspect kindnet-416400
	W1213 10:18:00.960344    6436 cli_runner.go:211] docker network inspect kindnet-416400 returned with exit code 1
	I1213 10:18:00.961297    6436 network_create.go:287] error running [docker network inspect kindnet-416400]: docker network inspect kindnet-416400: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-416400 not found
	I1213 10:18:00.961297    6436 network_create.go:289] output of [docker network inspect kindnet-416400]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-416400 not found
	
	** /stderr **
	I1213 10:18:00.964860    6436 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:18:01.049602    6436 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:18:01.079976    6436 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:18:01.095882    6436 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:18:01.111785    6436 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:18:01.142668    6436 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:18:01.173768    6436 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:18:01.189620    6436 network.go:209] skipping subnet 192.168.103.0/24 that is reserved: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:18:01.204043    6436 network.go:206] using free private subnet 192.168.112.0/24: &{IP:192.168.112.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.112.0/24 Gateway:192.168.112.1 ClientMin:192.168.112.2 ClientMax:192.168.112.254 Broadcast:192.168.112.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018c8600}
	I1213 10:18:01.204043    6436 network_create.go:124] attempt to create docker network kindnet-416400 192.168.112.0/24 with gateway 192.168.112.1 and MTU of 1500 ...
	I1213 10:18:01.210831    6436 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.112.0/24 --gateway=192.168.112.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-416400 kindnet-416400
	I1213 10:18:01.366696    6436 network_create.go:108] docker network kindnet-416400 192.168.112.0/24 created
	I1213 10:18:01.366696    6436 kic.go:121] calculated static IP "192.168.112.2" for the "kindnet-416400" container
	I1213 10:18:01.376226    6436 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 10:18:01.436190    6436 cli_runner.go:164] Run: docker volume create kindnet-416400 --label name.minikube.sigs.k8s.io=kindnet-416400 --label created_by.minikube.sigs.k8s.io=true
	I1213 10:18:01.489173    6436 oci.go:103] Successfully created a docker volume kindnet-416400
	I1213 10:18:01.492179    6436 cli_runner.go:164] Run: docker run --rm --name kindnet-416400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-416400 --entrypoint /usr/bin/test -v kindnet-416400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 10:18:02.838841    6436 cli_runner.go:217] Completed: docker run --rm --name kindnet-416400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-416400 --entrypoint /usr/bin/test -v kindnet-416400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.3466433s)
	I1213 10:18:02.838841    6436 oci.go:107] Successfully prepared a docker volume kindnet-416400
	I1213 10:18:02.838841    6436 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:18:02.838841    6436 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 10:18:02.843829    6436 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-416400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> Docker <==
	Dec 13 10:09:34 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:34.979577017Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 13 10:09:34 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:34.979670526Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 13 10:09:34 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:34.979683227Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 13 10:09:34 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:34.979688528Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 10:09:34 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:34.979693828Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 13 10:09:34 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:34.979719131Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 13 10:09:34 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:34.979754934Z" level=info msg="Initializing buildkit"
	Dec 13 10:09:35 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:35.145509829Z" level=info msg="Completed buildkit initialization"
	Dec 13 10:09:35 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:35.154410655Z" level=info msg="Daemon has completed initialization"
	Dec 13 10:09:35 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:35.154649477Z" level=info msg="API listen on /run/docker.sock"
	Dec 13 10:09:35 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:35.154687681Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 10:09:35 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:35.154696782Z" level=info msg="API listen on [::]:2376"
	Dec 13 10:09:35 no-preload-803600 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 13 10:09:35 no-preload-803600 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 10:09:35 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:35Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 13 10:09:35 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:35Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 13 10:09:35 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:35Z" level=info msg="Start docker client with request timeout 0s"
	Dec 13 10:09:36 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:36Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 13 10:09:36 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:36Z" level=info msg="Loaded network plugin cni"
	Dec 13 10:09:36 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:36Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 13 10:09:36 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:36Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 13 10:09:36 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:36Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 13 10:09:36 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:36Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 13 10:09:36 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:36Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 13 10:09:36 no-preload-803600 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:18:09.347112   11206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:09.348280   11206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:09.349233   11206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:09.350430   11206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:09.351473   11206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +6.633049] CPU: 11 PID: 394872 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f6f90941b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f6f90941af6.
	[  +0.000001] RSP: 002b:00007fff4c4a6cf0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.821723] CPU: 8 PID: 395025 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f194adc7b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f194adc7af6.
	[  +0.000001] RSP: 002b:00007ffd7d3eb9b0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +8.999623] tmpfs: Unknown parameter 'noswap'
	[  +8.764256] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 10:18:09 up  1:54,  0 user,  load average: 2.38, 3.25, 3.30
	Linux no-preload-803600 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:18:05 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:18:06 no-preload-803600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 13 10:18:06 no-preload-803600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:18:06 no-preload-803600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:18:06 no-preload-803600 kubelet[10969]: E1213 10:18:06.719630   10969 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:18:06 no-preload-803600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:18:06 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:18:07 no-preload-803600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 13 10:18:07 no-preload-803600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:18:07 no-preload-803600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:18:07 no-preload-803600 kubelet[11042]: E1213 10:18:07.464772   11042 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:18:07 no-preload-803600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:18:07 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:18:08 no-preload-803600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 13 10:18:08 no-preload-803600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:18:08 no-preload-803600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:18:08 no-preload-803600 kubelet[11071]: E1213 10:18:08.216450   11071 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:18:08 no-preload-803600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:18:08 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:18:08 no-preload-803600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 13 10:18:08 no-preload-803600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:18:08 no-preload-803600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:18:08 no-preload-803600 kubelet[11107]: E1213 10:18:08.971519   11107 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:18:08 no-preload-803600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:18:08 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
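
The kubelet journal above shows the actual failure mode behind this group of no-preload/newest-cni failures: kubelet v1.35.0-beta.0 exits on startup because the WSL2 host still exposes cgroup v1, and kubeadm then times out waiting on http://127.0.0.1:10248/healthz. A minimal diagnostic sketch, assuming docker exec access to the node container; the failCgroupV1 key is taken from the SystemVerification warning above, and its exact KubeletConfiguration spelling is an assumption, not something verified against this build:

	# Confirm the cgroup version the node container sees (cgroup2fs => v2, tmpfs => v1).
	docker exec no-preload-803600 stat -fc %T /sys/fs/cgroup
	# Re-read the kubelet crash loop captured in the journal above.
	docker exec no-preload-803600 journalctl -xeu kubelet | tail -n 40
	# Assumed mitigation per the preflight warning: opt kubelet back into cgroup v1.
	docker exec no-preload-803600 /bin/bash -c "echo 'failCgroupV1: false' >> /var/lib/kubelet/config.yaml && systemctl restart kubelet"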
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-803600 -n no-preload-803600
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-803600 -n no-preload-803600: exit status 6 (569.6996ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1213 10:18:10.111535    9944 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-803600" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-803600" apiserver is not running, skipping kubectl commands (state="Stopped")
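
The status output above points at a stale kubeconfig rather than a live API server, and minikube itself prints the fix. A short usage sketch of that suggestion (hedged: the profile name is taken from this test, and the outcome was not verified here since the apiserver is Stopped):

	# Re-point kubectl at this profile, as the status warning suggests.
	out/minikube-windows-amd64.exe update-context -p no-preload-803600
	# Verify the context now exists; with the apiserver stopped this still won't connect.
	kubectl config get-contexts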
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
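
Since the host snapshot reports all proxy variables empty, a quick cross-check of what the node container itself sees can rule out proxy-induced registry failures; a sketch under the assumption that the container is still running:

	# Compare the host snapshot above with the node container's environment.
	docker exec no-preload-803600 env | grep -iE 'http_proxy|https_proxy|no_proxy' || echo 'no proxy vars set in node'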
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-803600
helpers_test.go:244: (dbg) docker inspect no-preload-803600:

-- stdout --
	[
	    {
	        "Id": "3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd",
	        "Created": "2025-12-13T10:09:24.921242732Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 327940,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:09:25.240761048Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd/hostname",
	        "HostsPath": "/var/lib/docker/containers/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd/hosts",
	        "LogPath": "/var/lib/docker/containers/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd-json.log",
	        "Name": "/no-preload-803600",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-803600:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-803600",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/571041f9092b0534048a0b1dac35e9d4a08a2ff2442796fa15a0636437fe7f5e-init/diff:/var/lib/docker/overlay2/429aa299c6fcdb1695d08ec7c893c57c033afffcd3ec41fc904bf3236db5abde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/571041f9092b0534048a0b1dac35e9d4a08a2ff2442796fa15a0636437fe7f5e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/571041f9092b0534048a0b1dac35e9d4a08a2ff2442796fa15a0636437fe7f5e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/571041f9092b0534048a0b1dac35e9d4a08a2ff2442796fa15a0636437fe7f5e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-803600",
	                "Source": "/var/lib/docker/volumes/no-preload-803600/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-803600",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-803600",
	                "name.minikube.sigs.k8s.io": "no-preload-803600",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a1ed7a1408fdd16408942ad2920ffd10571f40dc038c29f6667e5ed69ec2ea92",
	            "SandboxKey": "/var/run/docker/netns/a1ed7a1408fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52686"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52682"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52683"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52684"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52685"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-803600": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ad4e73e428abf58593ff96b4628f21032a7a4afd7c1c0bb8be8d55b4e2d320fc",
	                    "EndpointID": "f89c7b01b868d720f5fc06986024a266fce8726dc2b3c53a5ec6b002f8b5ec56",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-803600",
	                        "3960d9897f63"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
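Note: when only a few fields of the inspect dump matter, docker inspect accepts a Go-template --format query. A sketch pulling the container state and the profile network's IP seen above; index is needed because the network key contains hyphens:

	docker inspect -f '{{.State.Status}}' no-preload-803600
	docker inspect -f '{{(index .NetworkSettings.Networks "no-preload-803600").IPAddress}}' no-preload-803600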
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-803600 -n no-preload-803600
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-803600 -n no-preload-803600: exit status 6 (606.8565ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1213 10:18:10.792429    9288 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-803600" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-803600 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-803600 logs -n 25: (2.926185s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                      │    PROFILE     │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-416400 sudo systemctl status kubelet --all --full --no-pager                                           │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl cat kubelet --no-pager                                                           │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo journalctl -xeu kubelet --all --full --no-pager                                            │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cat /etc/kubernetes/kubelet.conf                                                           │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cat /var/lib/kubelet/config.yaml                                                           │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl status docker --all --full --no-pager                                            │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl cat docker --no-pager                                                            │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cat /etc/docker/daemon.json                                                                │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo docker system info                                                                         │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl status cri-docker --all --full --no-pager                                        │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl cat cri-docker --no-pager                                                        │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                   │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cat /usr/lib/systemd/system/cri-docker.service                                             │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cri-dockerd --version                                                                      │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl status containerd --all --full --no-pager                                        │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl cat containerd --no-pager                                                        │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cat /lib/systemd/system/containerd.service                                                 │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cat /etc/containerd/config.toml                                                            │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo containerd config dump                                                                     │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl status crio --all --full --no-pager                                              │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │                     │
	│ ssh     │ -p auto-416400 sudo systemctl cat crio --no-pager                                                              │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                    │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo crio config                                                                                │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ delete  │ -p auto-416400                                                                                                 │ auto-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ start   │ -p kindnet-416400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker │ kindnet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:18 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:18:00
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:17:59.982121    6436 out.go:360] Setting OutFile to fd 1200 ...
	I1213 10:18:00.024750    6436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:18:00.024821    6436 out.go:374] Setting ErrFile to fd 1736...
	I1213 10:18:00.024821    6436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:18:00.039127    6436 out.go:368] Setting JSON to false
	I1213 10:18:00.042132    6436 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6887,"bootTime":1765614192,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 10:18:00.042132    6436 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 10:18:00.048133    6436 out.go:179] * [kindnet-416400] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 10:18:00.052119    6436 notify.go:221] Checking for updates...
	I1213 10:18:00.054248    6436 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:18:00.056421    6436 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:18:00.060745    6436 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 10:18:00.063186    6436 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:18:00.066370    6436 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:18:00.069771    6436 config.go:182] Loaded profile config "kubernetes-upgrade-481200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:18:00.069864    6436 config.go:182] Loaded profile config "newest-cni-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:18:00.069864    6436 config.go:182] Loaded profile config "no-preload-803600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:18:00.070450    6436 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:18:00.192644    6436 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 10:18:00.198649    6436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:18:00.421515    6436 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:18:00.403252148 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:18:00.425087    6436 out.go:179] * Using the docker driver based on user configuration
	I1213 10:18:00.426922    6436 start.go:309] selected driver: docker
	I1213 10:18:00.427003    6436 start.go:927] validating driver "docker" against <nil>
	I1213 10:18:00.427099    6436 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:18:00.513356    6436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:18:00.742258    6436 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:18:00.726260812 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:18:00.743264    6436 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:18:00.743264    6436 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:18:00.746255    6436 out.go:179] * Using Docker Desktop driver with root privileges
	I1213 10:18:00.748273    6436 cni.go:84] Creating CNI manager for "kindnet"
	I1213 10:18:00.748273    6436 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 10:18:00.748273    6436 start.go:353] cluster config:
	{Name:kindnet-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:18:00.750286    6436 out.go:179] * Starting "kindnet-416400" primary control-plane node in "kindnet-416400" cluster
	I1213 10:18:00.754272    6436 cache.go:134] Beginning downloading kic base image for docker with docker
	I1213 10:18:00.757272    6436 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:18:00.759259    6436 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:18:00.759259    6436 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:18:00.760267    6436 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1213 10:18:00.760267    6436 cache.go:65] Caching tarball of preloaded images
	I1213 10:18:00.760267    6436 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 10:18:00.760267    6436 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1213 10:18:00.760267    6436 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\config.json ...
	I1213 10:18:00.760267    6436 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\config.json: {Name:mkb57822615d533cf4e4f00f9118393a9934e233 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:18:00.831962    6436 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:18:00.832560    6436 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:18:00.832612    6436 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:18:00.832612    6436 start.go:360] acquireMachinesLock for kindnet-416400: {Name:mk1cbf47b4d1a255d1032f17aad230077b5c0db7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:18:00.832612    6436 start.go:364] duration metric: took 0s to acquireMachinesLock for "kindnet-416400"
	I1213 10:18:00.832612    6436 start.go:93] Provisioning new machine with config: &{Name:kindnet-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 10:18:00.832612    6436 start.go:125] createHost starting for "" (driver="docker")
	I1213 10:18:04.183403    2828 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:18:04.183403    2828 kubeadm.go:319] 
	I1213 10:18:04.184173    2828 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 10:18:04.186667    2828 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:18:04.186667    2828 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:18:04.186667    2828 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:18:04.187620    2828 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1213 10:18:04.187620    2828 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1213 10:18:04.187620    2828 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1213 10:18:04.187620    2828 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1213 10:18:04.187620    2828 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1213 10:18:04.188149    2828 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1213 10:18:04.188862    2828 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1213 10:18:04.188980    2828 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1213 10:18:04.188980    2828 kubeadm.go:319] CONFIG_INET: enabled
	I1213 10:18:04.188980    2828 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1213 10:18:04.188980    2828 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1213 10:18:04.188980    2828 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1213 10:18:04.189584    2828 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1213 10:18:04.189688    2828 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1213 10:18:04.189688    2828 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1213 10:18:04.189688    2828 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1213 10:18:04.189688    2828 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1213 10:18:04.189688    2828 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1213 10:18:04.189688    2828 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1213 10:18:04.190208    2828 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1213 10:18:04.190389    2828 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1213 10:18:04.190389    2828 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1213 10:18:04.190389    2828 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1213 10:18:04.190389    2828 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1213 10:18:04.190389    2828 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1213 10:18:04.190984    2828 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1213 10:18:04.191111    2828 kubeadm.go:319] OS: Linux
	I1213 10:18:04.191174    2828 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:18:04.191286    2828 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:18:04.191402    2828 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:18:04.191585    2828 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:18:04.191585    2828 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:18:04.191585    2828 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:18:04.191585    2828 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:18:04.191585    2828 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:18:04.191585    2828 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:18:04.192202    2828 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:18:04.192303    2828 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:18:04.192464    2828 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:18:04.192542    2828 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:18:04.194798    2828 out.go:252]   - Generating certificates and keys ...
	I1213 10:18:04.194947    2828 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:18:04.194947    2828 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:18:04.194947    2828 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:18:04.194947    2828 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:18:04.196221    2828 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:18:04.196221    2828 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:18:04.196221    2828 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:18:04.196221    2828 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:18:04.196221    2828 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:18:04.196778    2828 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:18:04.196778    2828 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:18:04.196778    2828 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:18:04.196778    2828 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:18:04.196778    2828 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:18:04.197300    2828 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:18:04.197357    2828 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:18:04.197430    2828 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:18:04.197430    2828 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:18:04.197430    2828 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:18:04.201144    2828 out.go:252]   - Booting up control plane ...
	I1213 10:18:04.201307    2828 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:18:04.201307    2828 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:18:04.201307    2828 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:18:04.201307    2828 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:18:04.201899    2828 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:18:04.201899    2828 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:18:04.201899    2828 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:18:04.201899    2828 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:18:04.201899    2828 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:18:04.202862    2828 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:18:04.202862    2828 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001338168s
	I1213 10:18:04.202862    2828 kubeadm.go:319] 
	I1213 10:18:04.202862    2828 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:18:04.202862    2828 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:18:04.202862    2828 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:18:04.202862    2828 kubeadm.go:319] 
	I1213 10:18:04.203562    2828 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:18:04.203562    2828 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:18:04.203562    2828 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:18:04.203562    2828 kubeadm.go:319] 
	I1213 10:18:04.203562    2828 kubeadm.go:403] duration metric: took 8m4.1550359s to StartCluster
	I1213 10:18:04.203562    2828 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:18:04.207228    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:18:04.273383    2828 cri.go:89] found id: ""
	I1213 10:18:04.273383    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.273383    2828 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:18:04.273383    2828 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:18:04.277565    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:18:04.322297    2828 cri.go:89] found id: ""
	I1213 10:18:04.322297    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.322367    2828 logs.go:284] No container was found matching "etcd"
	I1213 10:18:04.322367    2828 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:18:04.326520    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:18:04.369083    2828 cri.go:89] found id: ""
	I1213 10:18:04.369140    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.369163    2828 logs.go:284] No container was found matching "coredns"
	I1213 10:18:04.369163    2828 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:18:04.373406    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:18:04.421351    2828 cri.go:89] found id: ""
	I1213 10:18:04.421351    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.421351    2828 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:18:04.421351    2828 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:18:04.425824    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:18:04.478322    2828 cri.go:89] found id: ""
	I1213 10:18:04.478322    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.478322    2828 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:18:04.478322    2828 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:18:04.484844    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:18:04.526345    2828 cri.go:89] found id: ""
	I1213 10:18:04.526345    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.526345    2828 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:18:04.526345    2828 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:18:04.530940    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:18:04.579137    2828 cri.go:89] found id: ""
	I1213 10:18:04.579137    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.579137    2828 logs.go:284] No container was found matching "kindnet"
	I1213 10:18:04.579137    2828 logs.go:123] Gathering logs for kubelet ...
	I1213 10:18:04.579137    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:18:04.640211    2828 logs.go:123] Gathering logs for dmesg ...
	I1213 10:18:04.640211    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:18:04.678021    2828 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:18:04.678021    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:18:04.767758    2828 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:18:04.755802   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.756711   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.759289   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.760591   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.761734   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:18:04.755802   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.756711   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.759289   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.760591   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.761734   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
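	Note: the connection-refused errors above are downstream of the unhealthy kubelet; nothing ever bound localhost:8443. A sketch of re-running kubeadm's health probe and the triage commands it recommends, by hand against the profile this post-mortem covers (assumes minikube ssh forwards the trailing command, its usual behavior):
	
		minikube ssh -p no-preload-803600 -- curl -sS http://127.0.0.1:10248/healthz
		minikube ssh -p no-preload-803600 -- sudo systemctl status kubelet --no-pager
		minikube ssh -p no-preload-803600 -- sudo journalctl -xeu kubelet --no-pager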
	I1213 10:18:04.767814    2828 logs.go:123] Gathering logs for Docker ...
	I1213 10:18:04.767846    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:18:04.804946    2828 logs.go:123] Gathering logs for container status ...
	I1213 10:18:04.804946    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:18:04.860957    2828 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001338168s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 10:18:04.860957    2828 out.go:285] * 
	W1213 10:18:04.861546    2828 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr identical to the kubeadm init output printed above; verbatim duplicate elided]
	
	W1213 10:18:04.861737    2828 out.go:285] * 
	W1213 10:18:04.863650    2828 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:18:04.869031    2828 out.go:203] 
	W1213 10:18:04.871300    2828 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr identical to the kubeadm init output printed above; verbatim duplicate elided]
	
	W1213 10:18:04.871300    2828 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 10:18:04.871300    2828 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 10:18:04.874442    2828 out.go:203] 
	I1213 10:18:00.836584    6436 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 10:18:00.837174    6436 start.go:159] libmachine.API.Create for "kindnet-416400" (driver="docker")
	I1213 10:18:00.837174    6436 client.go:173] LocalClient.Create starting
	I1213 10:18:00.837789    6436 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1213 10:18:00.837789    6436 main.go:143] libmachine: Decoding PEM data...
	I1213 10:18:00.837789    6436 main.go:143] libmachine: Parsing certificate...
	I1213 10:18:00.837789    6436 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1213 10:18:00.837789    6436 main.go:143] libmachine: Decoding PEM data...
	I1213 10:18:00.838309    6436 main.go:143] libmachine: Parsing certificate...
	I1213 10:18:00.842081    6436 cli_runner.go:164] Run: docker network inspect kindnet-416400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 10:18:00.894842    6436 cli_runner.go:211] docker network inspect kindnet-416400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 10:18:00.899134    6436 network_create.go:284] running [docker network inspect kindnet-416400] to gather additional debugging logs...
	I1213 10:18:00.899219    6436 cli_runner.go:164] Run: docker network inspect kindnet-416400
	W1213 10:18:00.960344    6436 cli_runner.go:211] docker network inspect kindnet-416400 returned with exit code 1
	I1213 10:18:00.961297    6436 network_create.go:287] error running [docker network inspect kindnet-416400]: docker network inspect kindnet-416400: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-416400 not found
	I1213 10:18:00.961297    6436 network_create.go:289] output of [docker network inspect kindnet-416400]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-416400 not found
	
	** /stderr **
	I1213 10:18:00.964860    6436 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:18:01.049602    6436 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:18:01.079976    6436 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:18:01.095882    6436 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:18:01.111785    6436 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:18:01.142668    6436 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:18:01.173768    6436 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:18:01.189620    6436 network.go:209] skipping subnet 192.168.103.0/24 that is reserved: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:18:01.204043    6436 network.go:206] using free private subnet 192.168.112.0/24: &{IP:192.168.112.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.112.0/24 Gateway:192.168.112.1 ClientMin:192.168.112.2 ClientMax:192.168.112.254 Broadcast:192.168.112.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018c8600}
	I1213 10:18:01.204043    6436 network_create.go:124] attempt to create docker network kindnet-416400 192.168.112.0/24 with gateway 192.168.112.1 and MTU of 1500 ...
	I1213 10:18:01.210831    6436 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.112.0/24 --gateway=192.168.112.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-416400 kindnet-416400
	I1213 10:18:01.366696    6436 network_create.go:108] docker network kindnet-416400 192.168.112.0/24 created
	I1213 10:18:01.366696    6436 kic.go:121] calculated static IP "192.168.112.2" for the "kindnet-416400" container
	I1213 10:18:01.376226    6436 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 10:18:01.436190    6436 cli_runner.go:164] Run: docker volume create kindnet-416400 --label name.minikube.sigs.k8s.io=kindnet-416400 --label created_by.minikube.sigs.k8s.io=true
	I1213 10:18:01.489173    6436 oci.go:103] Successfully created a docker volume kindnet-416400
	I1213 10:18:01.492179    6436 cli_runner.go:164] Run: docker run --rm --name kindnet-416400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-416400 --entrypoint /usr/bin/test -v kindnet-416400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 10:18:02.838841    6436 cli_runner.go:217] Completed: docker run --rm --name kindnet-416400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-416400 --entrypoint /usr/bin/test -v kindnet-416400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.3466433s)
	I1213 10:18:02.838841    6436 oci.go:107] Successfully prepared a docker volume kindnet-416400
	I1213 10:18:02.838841    6436 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:18:02.838841    6436 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 10:18:02.843829    6436 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-416400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> Docker <==
	Dec 13 10:09:34 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:34.979577017Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 13 10:09:34 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:34.979670526Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 13 10:09:34 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:34.979683227Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 13 10:09:34 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:34.979688528Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 10:09:34 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:34.979693828Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 13 10:09:34 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:34.979719131Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 13 10:09:34 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:34.979754934Z" level=info msg="Initializing buildkit"
	Dec 13 10:09:35 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:35.145509829Z" level=info msg="Completed buildkit initialization"
	Dec 13 10:09:35 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:35.154410655Z" level=info msg="Daemon has completed initialization"
	Dec 13 10:09:35 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:35.154649477Z" level=info msg="API listen on /run/docker.sock"
	Dec 13 10:09:35 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:35.154687681Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 10:09:35 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:35.154696782Z" level=info msg="API listen on [::]:2376"
	Dec 13 10:09:35 no-preload-803600 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 13 10:09:35 no-preload-803600 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 10:09:35 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:35Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 13 10:09:35 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:35Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 13 10:09:35 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:35Z" level=info msg="Start docker client with request timeout 0s"
	Dec 13 10:09:36 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:36Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 13 10:09:36 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:36Z" level=info msg="Loaded network plugin cni"
	Dec 13 10:09:36 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:36Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 13 10:09:36 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:36Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 13 10:09:36 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:36Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 13 10:09:36 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:36Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 13 10:09:36 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:36Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 13 10:09:36 no-preload-803600 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:18:13.179035   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:13.180607   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:13.185502   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:13.186552   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:13.187505   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +6.633049] CPU: 11 PID: 394872 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f6f90941b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f6f90941af6.
	[  +0.000001] RSP: 002b:00007fff4c4a6cf0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.821723] CPU: 8 PID: 395025 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f194adc7b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f194adc7af6.
	[  +0.000001] RSP: 002b:00007ffd7d3eb9b0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +8.999623] tmpfs: Unknown parameter 'noswap'
	[  +8.764256] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 10:18:13 up  1:54,  0 user,  load average: 2.35, 3.23, 3.30
	Linux no-preload-803600 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:18:10 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:18:11 no-preload-803600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 328.
	Dec 13 10:18:11 no-preload-803600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:18:11 no-preload-803600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:18:11 no-preload-803600 kubelet[11282]: E1213 10:18:11.221741   11282 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:18:11 no-preload-803600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:18:11 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:18:11 no-preload-803600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 329.
	Dec 13 10:18:11 no-preload-803600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:18:11 no-preload-803600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:18:11 no-preload-803600 kubelet[11370]: E1213 10:18:11.946508   11370 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:18:11 no-preload-803600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:18:11 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:18:12 no-preload-803600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 330.
	Dec 13 10:18:12 no-preload-803600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:18:12 no-preload-803600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:18:12 no-preload-803600 kubelet[11384]: E1213 10:18:12.708547   11384 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:18:12 no-preload-803600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:18:12 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:18:13 no-preload-803600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 331.
	Dec 13 10:18:13 no-preload-803600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:18:13 no-preload-803600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:18:13 no-preload-803600 kubelet[11425]: E1213 10:18:13.468827   11425 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:18:13 no-preload-803600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:18:13 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
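The kubelet excerpt above shows the root cause behind the kubeadm timeouts in this report: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host (the WSL2 kernel here), so the wait-control-plane phase never sees a healthy http://127.0.0.1:10248/healthz. Per the preflight warning, cgroup v1 can be re-enabled explicitly by setting the FailCgroupV1 option to false in the kubelet configuration. A minimal sketch against the config path shown in the kubeadm output, assuming the serialized field name is failCgroupV1 (casing inferred from the warning text, not verified against the v1.35 API) and that the field is not already present in the file:

	# illustrative only: opt kubelet back into deprecated cgroup v1 support
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet

Note that the suggestion printed earlier in the log (--extra-config=kubelet.cgroup-driver=systemd) targets a cgroup-driver mismatch, which is a different condition, so it would not necessarily clear this validation error.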
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-803600 -n no-preload-803600
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-803600 -n no-preload-803600: exit status 6 (587.4772ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 10:18:14.402145    3704 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-803600" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-803600" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (6.76s)
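The status output above warns that kubectl points at a stale context, while the stderr shows the "no-preload-803600" entry is missing from the kubeconfig altogether. The warning's own remedy, written out with the profile flag used throughout this report (a sketch; whether update-context can recreate an entry that was removed entirely is not verified here):

	out/minikube-windows-amd64.exe update-context -p no-preload-803600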

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (88.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-803600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1213 10:19:01.708920    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-818600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-803600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m25.7117486s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_addons_e23971240287a88151a2b5edd52daaba3879ba4a_8.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
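The kubectl errors above suggest --validate=false, but that flag only skips the OpenAPI download used for client-side validation; the underlying failure is the refused connection to localhost:8443, so the apply would still fail at submission until the apiserver is reachable. For reference, the suggested form of the failing command, using the same paths the addon callback ran with:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --validate=false \
	  -f /etc/kubernetes/addons/metrics-apiservice.yaml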
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-803600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-803600 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-803600 describe deploy/metrics-server -n kube-system: exit status 1 (87.546ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-803600" does not exist

                                                
                                                
** /stderr **
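"context no-preload-803600 does not exist" is consistent with the earlier status.go error: the profile's entry is absent from the kubeconfig the harness points kubectl at. Listing what that file actually contains is a quick way to confirm (standard kubectl, shown against the kubeconfig path from the stderr above):

	kubectl config get-contexts --kubeconfig C:\Users\jenkins.minikube4\minikube-integration\kubeconfig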
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-803600 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-803600
helpers_test.go:244: (dbg) docker inspect no-preload-803600:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd",
	        "Created": "2025-12-13T10:09:24.921242732Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 327940,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:09:25.240761048Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd/hostname",
	        "HostsPath": "/var/lib/docker/containers/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd/hosts",
	        "LogPath": "/var/lib/docker/containers/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd-json.log",
	        "Name": "/no-preload-803600",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-803600:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-803600",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/571041f9092b0534048a0b1dac35e9d4a08a2ff2442796fa15a0636437fe7f5e-init/diff:/var/lib/docker/overlay2/429aa299c6fcdb1695d08ec7c893c57c033afffcd3ec41fc904bf3236db5abde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/571041f9092b0534048a0b1dac35e9d4a08a2ff2442796fa15a0636437fe7f5e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/571041f9092b0534048a0b1dac35e9d4a08a2ff2442796fa15a0636437fe7f5e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/571041f9092b0534048a0b1dac35e9d4a08a2ff2442796fa15a0636437fe7f5e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-803600",
	                "Source": "/var/lib/docker/volumes/no-preload-803600/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-803600",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-803600",
	                "name.minikube.sigs.k8s.io": "no-preload-803600",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a1ed7a1408fdd16408942ad2920ffd10571f40dc038c29f6667e5ed69ec2ea92",
	            "SandboxKey": "/var/run/docker/netns/a1ed7a1408fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52686"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52682"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52683"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52684"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52685"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-803600": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ad4e73e428abf58593ff96b4628f21032a7a4afd7c1c0bb8be8d55b4e2d320fc",
	                    "EndpointID": "f89c7b01b868d720f5fc06986024a266fce8726dc2b3c53a5ec6b002f8b5ec56",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-803600",
	                        "3960d9897f63"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
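Most of the docker inspect dump above is incidental; the load-bearing facts are that the container is Running and that 8443/tcp is published on 127.0.0.1:52685 while nothing inside answers it. A Go template narrows inspect output to exactly those fields, using the same --format mechanism the harness applies to docker network inspect elsewhere in this log:

	docker inspect -f '{{.State.Status}} {{json .NetworkSettings.Ports}}' no-preload-803600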
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-803600 -n no-preload-803600
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-803600 -n no-preload-803600: exit status 6 (588.9101ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 10:19:40.861631   12640 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-803600" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-803600 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-803600 logs -n 25: (1.0885248s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                   │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-416400 sudo journalctl -xeu kubelet --all --full --no-pager                                                                     │ auto-416400       │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cat /etc/kubernetes/kubelet.conf                                                                                    │ auto-416400       │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cat /var/lib/kubelet/config.yaml                                                                                    │ auto-416400       │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl status docker --all --full --no-pager                                                                     │ auto-416400       │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl cat docker --no-pager                                                                                     │ auto-416400       │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cat /etc/docker/daemon.json                                                                                         │ auto-416400       │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo docker system info                                                                                                  │ auto-416400       │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl status cri-docker --all --full --no-pager                                                                 │ auto-416400       │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl cat cri-docker --no-pager                                                                                 │ auto-416400       │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                            │ auto-416400       │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                      │ auto-416400       │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cri-dockerd --version                                                                                               │ auto-416400       │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl status containerd --all --full --no-pager                                                                 │ auto-416400       │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl cat containerd --no-pager                                                                                 │ auto-416400       │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cat /lib/systemd/system/containerd.service                                                                          │ auto-416400       │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo cat /etc/containerd/config.toml                                                                                     │ auto-416400       │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo containerd config dump                                                                                              │ auto-416400       │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo systemctl status crio --all --full --no-pager                                                                       │ auto-416400       │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │                     │
	│ ssh     │ -p auto-416400 sudo systemctl cat crio --no-pager                                                                                       │ auto-416400       │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                             │ auto-416400       │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ ssh     │ -p auto-416400 sudo crio config                                                                                                         │ auto-416400       │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ delete  │ -p auto-416400                                                                                                                          │ auto-416400       │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ start   │ -p kindnet-416400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker                          │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:18 UTC │ 13 Dec 25 10:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-803600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ no-preload-803600 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:18 UTC │                     │
	│ ssh     │ -p kindnet-416400 pgrep -a kubelet                                                                                                      │ kindnet-416400    │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
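	# Note: the `addons enable metrics-server` row above records no END TIME, which lines up with this
	# test's failure. Its command shape, copied verbatim from the audit entry, can be re-run by hand:
	out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-803600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain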
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:18:00
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:17:59.982121    6436 out.go:360] Setting OutFile to fd 1200 ...
	I1213 10:18:00.024750    6436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:18:00.024821    6436 out.go:374] Setting ErrFile to fd 1736...
	I1213 10:18:00.024821    6436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:18:00.039127    6436 out.go:368] Setting JSON to false
	I1213 10:18:00.042132    6436 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6887,"bootTime":1765614192,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 10:18:00.042132    6436 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 10:18:00.048133    6436 out.go:179] * [kindnet-416400] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 10:18:00.052119    6436 notify.go:221] Checking for updates...
	I1213 10:18:00.054248    6436 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:18:00.056421    6436 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:18:00.060745    6436 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 10:18:00.063186    6436 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:18:00.066370    6436 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:18:00.069771    6436 config.go:182] Loaded profile config "kubernetes-upgrade-481200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:18:00.069864    6436 config.go:182] Loaded profile config "newest-cni-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:18:00.069864    6436 config.go:182] Loaded profile config "no-preload-803600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:18:00.070450    6436 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:18:00.192644    6436 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 10:18:00.198649    6436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:18:00.421515    6436 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:18:00.403252148 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:18:00.425087    6436 out.go:179] * Using the docker driver based on user configuration
	I1213 10:18:00.426922    6436 start.go:309] selected driver: docker
	I1213 10:18:00.427003    6436 start.go:927] validating driver "docker" against <nil>
	I1213 10:18:00.427099    6436 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:18:00.513356    6436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:18:00.742258    6436 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:18:00.726260812 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:18:00.743264    6436 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:18:00.743264    6436 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:18:00.746255    6436 out.go:179] * Using Docker Desktop driver with root privileges
	I1213 10:18:00.748273    6436 cni.go:84] Creating CNI manager for "kindnet"
	I1213 10:18:00.748273    6436 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 10:18:00.748273    6436 start.go:353] cluster config:
	{Name:kindnet-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:18:00.750286    6436 out.go:179] * Starting "kindnet-416400" primary control-plane node in "kindnet-416400" cluster
	I1213 10:18:00.754272    6436 cache.go:134] Beginning downloading kic base image for docker with docker
	I1213 10:18:00.757272    6436 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:18:00.759259    6436 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:18:00.759259    6436 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:18:00.760267    6436 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1213 10:18:00.760267    6436 cache.go:65] Caching tarball of preloaded images
	I1213 10:18:00.760267    6436 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 10:18:00.760267    6436 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1213 10:18:00.760267    6436 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\config.json ...
	I1213 10:18:00.760267    6436 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\config.json: {Name:mkb57822615d533cf4e4f00f9118393a9934e233 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:18:00.831962    6436 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:18:00.832560    6436 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:18:00.832612    6436 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:18:00.832612    6436 start.go:360] acquireMachinesLock for kindnet-416400: {Name:mk1cbf47b4d1a255d1032f17aad230077b5c0db7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:18:00.832612    6436 start.go:364] duration metric: took 0s to acquireMachinesLock for "kindnet-416400"
	I1213 10:18:00.832612    6436 start.go:93] Provisioning new machine with config: &{Name:kindnet-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 10:18:00.832612    6436 start.go:125] createHost starting for "" (driver="docker")
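	# The base-image check logged at 10:18:00.759 above can be reproduced by hand; a minimal sketch
	# using the standard docker CLI (tag copied from the log, digest omitted):
	docker image inspect gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083 --format "{{.Id}}"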
	I1213 10:18:04.183403    2828 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:18:04.183403    2828 kubeadm.go:319] 
	I1213 10:18:04.184173    2828 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 10:18:04.186667    2828 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:18:04.186667    2828 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:18:04.186667    2828 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:18:04.187620    2828 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1213 10:18:04.187620    2828 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1213 10:18:04.187620    2828 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1213 10:18:04.187620    2828 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1213 10:18:04.187620    2828 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1213 10:18:04.188149    2828 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1213 10:18:04.188862    2828 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1213 10:18:04.188980    2828 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1213 10:18:04.188980    2828 kubeadm.go:319] CONFIG_INET: enabled
	I1213 10:18:04.188980    2828 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1213 10:18:04.188980    2828 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1213 10:18:04.188980    2828 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1213 10:18:04.189584    2828 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1213 10:18:04.189688    2828 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1213 10:18:04.189688    2828 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1213 10:18:04.189688    2828 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1213 10:18:04.189688    2828 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1213 10:18:04.189688    2828 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1213 10:18:04.189688    2828 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1213 10:18:04.190208    2828 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1213 10:18:04.190389    2828 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1213 10:18:04.190389    2828 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1213 10:18:04.190389    2828 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1213 10:18:04.190389    2828 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1213 10:18:04.190389    2828 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1213 10:18:04.190984    2828 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1213 10:18:04.191111    2828 kubeadm.go:319] OS: Linux
	I1213 10:18:04.191174    2828 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:18:04.191286    2828 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:18:04.191402    2828 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:18:04.191585    2828 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:18:04.191585    2828 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:18:04.191585    2828 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:18:04.191585    2828 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:18:04.191585    2828 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:18:04.191585    2828 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:18:04.192202    2828 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:18:04.192303    2828 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:18:04.192464    2828 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:18:04.192542    2828 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:18:04.194798    2828 out.go:252]   - Generating certificates and keys ...
	I1213 10:18:04.194947    2828 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:18:04.194947    2828 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:18:04.194947    2828 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:18:04.194947    2828 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:18:04.196221    2828 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:18:04.196221    2828 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:18:04.196221    2828 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:18:04.196221    2828 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:18:04.196221    2828 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:18:04.196778    2828 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:18:04.196778    2828 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:18:04.196778    2828 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:18:04.196778    2828 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:18:04.196778    2828 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:18:04.197300    2828 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:18:04.197357    2828 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:18:04.197430    2828 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:18:04.197430    2828 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:18:04.197430    2828 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:18:04.201144    2828 out.go:252]   - Booting up control plane ...
	I1213 10:18:04.201307    2828 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:18:04.201307    2828 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:18:04.201307    2828 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:18:04.201307    2828 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:18:04.201899    2828 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:18:04.201899    2828 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:18:04.201899    2828 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:18:04.201899    2828 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:18:04.201899    2828 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:18:04.202862    2828 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:18:04.202862    2828 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001338168s
	I1213 10:18:04.202862    2828 kubeadm.go:319] 
	I1213 10:18:04.202862    2828 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:18:04.202862    2828 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:18:04.202862    2828 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:18:04.202862    2828 kubeadm.go:319] 
	I1213 10:18:04.203562    2828 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:18:04.203562    2828 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:18:04.203562    2828 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:18:04.203562    2828 kubeadm.go:319] 
	I1213 10:18:04.203562    2828 kubeadm.go:403] duration metric: took 8m4.1550359s to StartCluster
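	# The wait-control-plane failure above turns on the kubelet health endpoint. A sketch of the probe
	# and the follow-up the error message suggests, run from inside the node (profile name assumed to be
	# no-preload-803600, consistent with this post-mortem):
	#   out/minikube-windows-amd64.exe -p no-preload-803600 ssh
	curl -sSL http://127.0.0.1:10248/healthz     # the exact check kubeadm reports as timing out
	sudo journalctl -xeu kubelet -n 100          # kubelet logs, per the troubleshooting hint above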
	I1213 10:18:04.203562    2828 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:18:04.207228    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:18:04.273383    2828 cri.go:89] found id: ""
	I1213 10:18:04.273383    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.273383    2828 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:18:04.273383    2828 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:18:04.277565    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:18:04.322297    2828 cri.go:89] found id: ""
	I1213 10:18:04.322297    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.322367    2828 logs.go:284] No container was found matching "etcd"
	I1213 10:18:04.322367    2828 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:18:04.326520    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:18:04.369083    2828 cri.go:89] found id: ""
	I1213 10:18:04.369140    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.369163    2828 logs.go:284] No container was found matching "coredns"
	I1213 10:18:04.369163    2828 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:18:04.373406    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:18:04.421351    2828 cri.go:89] found id: ""
	I1213 10:18:04.421351    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.421351    2828 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:18:04.421351    2828 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:18:04.425824    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:18:04.478322    2828 cri.go:89] found id: ""
	I1213 10:18:04.478322    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.478322    2828 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:18:04.478322    2828 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:18:04.484844    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:18:04.526345    2828 cri.go:89] found id: ""
	I1213 10:18:04.526345    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.526345    2828 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:18:04.526345    2828 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:18:04.530940    2828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:18:04.579137    2828 cri.go:89] found id: ""
	I1213 10:18:04.579137    2828 logs.go:282] 0 containers: []
	W1213 10:18:04.579137    2828 logs.go:284] No container was found matching "kindnet"
	I1213 10:18:04.579137    2828 logs.go:123] Gathering logs for kubelet ...
	I1213 10:18:04.579137    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:18:04.640211    2828 logs.go:123] Gathering logs for dmesg ...
	I1213 10:18:04.640211    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:18:04.678021    2828 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:18:04.678021    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:18:04.767758    2828 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:18:04.755802   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.756711   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.759289   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.760591   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.761734   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:18:04.755802   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.756711   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.759289   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.760591   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:18:04.761734   10848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:18:04.767814    2828 logs.go:123] Gathering logs for Docker ...
	I1213 10:18:04.767846    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:18:04.804946    2828 logs.go:123] Gathering logs for container status ...
	I1213 10:18:04.804946    2828 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:18:04.860957    2828 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001338168s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 10:18:04.860957    2828 out.go:285] * 
	W1213 10:18:04.861546    2828 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001338168s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:18:04.861737    2828 out.go:285] * 
	W1213 10:18:04.863650    2828 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:18:04.869031    2828 out.go:203] 
	W1213 10:18:04.871300    2828 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001338168s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:18:04.871300    2828 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 10:18:04.871300    2828 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 10:18:04.874442    2828 out.go:203] 
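	The failure above is kubeadm timing out on the kubelet's local health endpoint (http://127.0.0.1:10248/healthz) after starting it. The probe and the two suggested diagnostics can be reproduced by hand from the host; this is a sketch, with <profile-container> as a hypothetical placeholder for the docker-driver node container (not named in this excerpt), and it assumes curl is present in the node image:

		docker exec <profile-container> curl -sS http://127.0.0.1:10248/healthz           # the endpoint kubeadm polls
		docker exec <profile-container> systemctl status kubelet --no-pager               # is the unit running at all?
		docker exec <profile-container> journalctl -xeu kubelet --no-pager | tail -n 50   # most recent kubelet errors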
	I1213 10:18:00.836584    6436 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 10:18:00.837174    6436 start.go:159] libmachine.API.Create for "kindnet-416400" (driver="docker")
	I1213 10:18:00.837174    6436 client.go:173] LocalClient.Create starting
	I1213 10:18:00.837789    6436 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1213 10:18:00.837789    6436 main.go:143] libmachine: Decoding PEM data...
	I1213 10:18:00.837789    6436 main.go:143] libmachine: Parsing certificate...
	I1213 10:18:00.837789    6436 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1213 10:18:00.837789    6436 main.go:143] libmachine: Decoding PEM data...
	I1213 10:18:00.838309    6436 main.go:143] libmachine: Parsing certificate...
	I1213 10:18:00.842081    6436 cli_runner.go:164] Run: docker network inspect kindnet-416400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 10:18:00.894842    6436 cli_runner.go:211] docker network inspect kindnet-416400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 10:18:00.899134    6436 network_create.go:284] running [docker network inspect kindnet-416400] to gather additional debugging logs...
	I1213 10:18:00.899219    6436 cli_runner.go:164] Run: docker network inspect kindnet-416400
	W1213 10:18:00.960344    6436 cli_runner.go:211] docker network inspect kindnet-416400 returned with exit code 1
	I1213 10:18:00.961297    6436 network_create.go:287] error running [docker network inspect kindnet-416400]: docker network inspect kindnet-416400: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-416400 not found
	I1213 10:18:00.961297    6436 network_create.go:289] output of [docker network inspect kindnet-416400]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-416400 not found
	
	** /stderr **
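	The exit-1 inspect above is an expected negative probe: minikube checks whether the profile network exists before creating it. The same check can be run by hand with a simpler Go template (a sketch, host docker CLI assumed):

		docker network inspect kindnet-416400 \
		  --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}' \
		  || echo 'network absent; minikube proceeds to docker network create'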
	I1213 10:18:00.964860    6436 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:18:01.049602    6436 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:18:01.079976    6436 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:18:01.095882    6436 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:18:01.111785    6436 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:18:01.142668    6436 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:18:01.173768    6436 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:18:01.189620    6436 network.go:209] skipping subnet 192.168.103.0/24 that is reserved: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:18:01.204043    6436 network.go:206] using free private subnet 192.168.112.0/24: &{IP:192.168.112.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.112.0/24 Gateway:192.168.112.1 ClientMin:192.168.112.2 ClientMax:192.168.112.254 Broadcast:192.168.112.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018c8600}
	I1213 10:18:01.204043    6436 network_create.go:124] attempt to create docker network kindnet-416400 192.168.112.0/24 with gateway 192.168.112.1 and MTU of 1500 ...
	I1213 10:18:01.210831    6436 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.112.0/24 --gateway=192.168.112.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-416400 kindnet-416400
	I1213 10:18:01.366696    6436 network_create.go:108] docker network kindnet-416400 192.168.112.0/24 created
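	The skipped-subnet walk above steps through minikube's candidate private /24 blocks (192.168.49.0, 58.0, 67.0, ..., in increments of 9) until one is unreserved. A rough host-side view of what Docker itself already holds (a sketch; minikube's reservation logic also accounts for host interfaces, and GNU xargs is assumed):

		docker network ls --format '{{.Name}}' \
		  | xargs -r -n1 docker network inspect \
		      --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'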
	I1213 10:18:01.366696    6436 kic.go:121] calculated static IP "192.168.112.2" for the "kindnet-416400" container
	I1213 10:18:01.376226    6436 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 10:18:01.436190    6436 cli_runner.go:164] Run: docker volume create kindnet-416400 --label name.minikube.sigs.k8s.io=kindnet-416400 --label created_by.minikube.sigs.k8s.io=true
	I1213 10:18:01.489173    6436 oci.go:103] Successfully created a docker volume kindnet-416400
	I1213 10:18:01.492179    6436 cli_runner.go:164] Run: docker run --rm --name kindnet-416400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-416400 --entrypoint /usr/bin/test -v kindnet-416400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 10:18:02.838841    6436 cli_runner.go:217] Completed: docker run --rm --name kindnet-416400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-416400 --entrypoint /usr/bin/test -v kindnet-416400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.3466433s)
	I1213 10:18:02.838841    6436 oci.go:107] Successfully prepared a docker volume kindnet-416400
	I1213 10:18:02.838841    6436 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:18:02.838841    6436 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 10:18:02.843829    6436 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-416400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 10:18:18.051361    6436 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-416400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (15.2073153s)
	I1213 10:18:18.051361    6436 kic.go:203] duration metric: took 15.2123035s to extract preloaded images to volume ...
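	The two docker run invocations above implement the preload: a sidecar container validates the volume, then a throwaway container untars the preloaded-images tarball into it (about 15s here). A spot check of the result, assuming /bin/ls exists in the kicbase image and with <tag> as a placeholder for the kicbase tag used in this log:

		docker run --rm --entrypoint /bin/ls \
		  -v kindnet-416400:/var \
		  gcr.io/k8s-minikube/kicbase-builds:<tag> /var/lib/docker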
	I1213 10:18:18.055464    6436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:18:18.296935    6436 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:18:18.276763617 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:18:18.301751    6436 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 10:18:18.541471    6436 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-416400 --name kindnet-416400 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-416400 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-416400 --network kindnet-416400 --ip 192.168.112.2 --volume kindnet-416400:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 10:18:19.203480    6436 cli_runner.go:164] Run: docker container inspect kindnet-416400 --format={{.State.Running}}
	I1213 10:18:19.259989    6436 cli_runner.go:164] Run: docker container inspect kindnet-416400 --format={{.State.Status}}
	I1213 10:18:19.311986    6436 cli_runner.go:164] Run: docker exec kindnet-416400 stat /var/lib/dpkg/alternatives/iptables
	I1213 10:18:19.424835    6436 oci.go:144] the created container "kindnet-416400" has a running status.
	I1213 10:18:19.424835    6436 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-416400\id_rsa...
	I1213 10:18:19.487392    6436 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-416400\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 10:18:19.560522    6436 cli_runner.go:164] Run: docker container inspect kindnet-416400 --format={{.State.Status}}
	I1213 10:18:19.619518    6436 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 10:18:19.619518    6436 kic_runner.go:114] Args: [docker exec --privileged kindnet-416400 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 10:18:19.734523    6436 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-416400\id_rsa...
	I1213 10:18:21.825951    6436 cli_runner.go:164] Run: docker container inspect kindnet-416400 --format={{.State.Status}}
	I1213 10:18:21.885412    6436 machine.go:94] provisionDockerMachine start ...
	I1213 10:18:21.887946    6436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-416400
	I1213 10:18:21.955646    6436 main.go:143] libmachine: Using SSH client type: native
	I1213 10:18:21.971968    6436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53379 <nil> <nil>}
	I1213 10:18:21.972035    6436 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:18:22.145972    6436 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-416400
	
	I1213 10:18:22.145972    6436 ubuntu.go:182] provisioning hostname "kindnet-416400"
	I1213 10:18:22.149516    6436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-416400
	I1213 10:18:22.206988    6436 main.go:143] libmachine: Using SSH client type: native
	I1213 10:18:22.207027    6436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53379 <nil> <nil>}
	I1213 10:18:22.207027    6436 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-416400 && echo "kindnet-416400" | sudo tee /etc/hostname
	I1213 10:18:22.392993    6436 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-416400
	
	I1213 10:18:22.399126    6436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-416400
	I1213 10:18:22.457228    6436 main.go:143] libmachine: Using SSH client type: native
	I1213 10:18:22.457291    6436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53379 <nil> <nil>}
	I1213 10:18:22.457291    6436 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-416400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-416400/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-416400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:18:22.649172    6436 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:18:22.649172    6436 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1213 10:18:22.649172    6436 ubuntu.go:190] setting up certificates
	I1213 10:18:22.649172    6436 provision.go:84] configureAuth start
	I1213 10:18:22.652836    6436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-416400
	I1213 10:18:22.709455    6436 provision.go:143] copyHostCerts
	I1213 10:18:22.709544    6436 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1213 10:18:22.709544    6436 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1213 10:18:22.710198    6436 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1213 10:18:22.710801    6436 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1213 10:18:22.710801    6436 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1213 10:18:22.711386    6436 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1213 10:18:22.711971    6436 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1213 10:18:22.711971    6436 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1213 10:18:22.711971    6436 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1213 10:18:22.713330    6436 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kindnet-416400 san=[127.0.0.1 192.168.112.2 kindnet-416400 localhost minikube]
	I1213 10:18:22.849753    6436 provision.go:177] copyRemoteCerts
	I1213 10:18:22.853470    6436 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:18:22.856894    6436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-416400
	I1213 10:18:22.909885    6436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53379 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-416400\id_rsa Username:docker}
	I1213 10:18:23.054936    6436 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:18:23.085517    6436 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:18:23.117144    6436 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I1213 10:18:23.143707    6436 provision.go:87] duration metric: took 494.5271ms to configureAuth
	I1213 10:18:23.143707    6436 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:18:23.144223    6436 config.go:182] Loaded profile config "kindnet-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 10:18:23.147758    6436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-416400
	I1213 10:18:23.204472    6436 main.go:143] libmachine: Using SSH client type: native
	I1213 10:18:23.204472    6436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53379 <nil> <nil>}
	I1213 10:18:23.204472    6436 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 10:18:23.390196    6436 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1213 10:18:23.390196    6436 ubuntu.go:71] root file system type: overlay
	I1213 10:18:23.390196    6436 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 10:18:23.393986    6436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-416400
	I1213 10:18:23.448987    6436 main.go:143] libmachine: Using SSH client type: native
	I1213 10:18:23.449712    6436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53379 <nil> <nil>}
	I1213 10:18:23.449712    6436 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 10:18:23.654058    6436 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 10:18:23.658124    6436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-416400
	I1213 10:18:23.715429    6436 main.go:143] libmachine: Using SSH client type: native
	I1213 10:18:23.715803    6436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53379 <nil> <nil>}
	I1213 10:18:23.715803    6436 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 10:18:25.216298    6436 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-13 10:18:23.649934051 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1213 10:18:25.216298    6436 machine.go:97] duration metric: took 3.3308383s to provisionDockerMachine
	I1213 10:18:25.216298    6436 client.go:176] duration metric: took 24.3787777s to LocalClient.Create
	I1213 10:18:25.216298    6436 start.go:167] duration metric: took 24.3787777s to libmachine.API.Create "kindnet-416400"
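	The diff-or-install one-liner above swaps in docker.service.new only when it differs from the installed unit, then daemon-reloads, enables, and restarts docker. Two quick in-node checks that the override took effect (a sketch, run over the same SSH channel):

		sudo systemctl cat docker.service | grep -A1 '^ExecStart='   # should show the dockerd flags from the new unit
		docker info --format '{{.CgroupDriver}}'                     # cgroupfs, matching the driver minikube detects below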
	I1213 10:18:25.216298    6436 start.go:293] postStartSetup for "kindnet-416400" (driver="docker")
	I1213 10:18:25.216298    6436 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:18:25.222181    6436 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:18:25.225165    6436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-416400
	I1213 10:18:25.279201    6436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53379 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-416400\id_rsa Username:docker}
	I1213 10:18:25.408628    6436 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:18:25.418325    6436 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:18:25.418376    6436 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:18:25.418376    6436 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1213 10:18:25.418376    6436 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1213 10:18:25.419614    6436 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> 29682.pem in /etc/ssl/certs
	I1213 10:18:25.426186    6436 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 10:18:25.440297    6436 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /etc/ssl/certs/29682.pem (1708 bytes)
	I1213 10:18:25.471262    6436 start.go:296] duration metric: took 254.9605ms for postStartSetup
	I1213 10:18:25.477452    6436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-416400
	I1213 10:18:25.532544    6436 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\config.json ...
	I1213 10:18:25.539538    6436 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:18:25.542871    6436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-416400
	I1213 10:18:25.596156    6436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53379 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-416400\id_rsa Username:docker}
	I1213 10:18:25.726962    6436 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:18:25.736872    6436 start.go:128] duration metric: took 24.9039069s to createHost
	I1213 10:18:25.736872    6436 start.go:83] releasing machines lock for "kindnet-416400", held for 24.9039069s
	I1213 10:18:25.741379    6436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-416400
	I1213 10:18:25.796471    6436 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1213 10:18:25.801185    6436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-416400
	I1213 10:18:25.801246    6436 ssh_runner.go:195] Run: cat /version.json
	I1213 10:18:25.804539    6436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-416400
	I1213 10:18:25.854742    6436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53379 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-416400\id_rsa Username:docker}
	I1213 10:18:25.857971    6436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53379 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-416400\id_rsa Username:docker}
	W1213 10:18:25.994563    6436 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1213 10:18:25.999686    6436 ssh_runner.go:195] Run: systemctl --version
	I1213 10:18:26.021474    6436 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:18:26.031809    6436 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:18:26.036765    6436 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:18:26.089055    6436 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 10:18:26.089055    6436 start.go:496] detecting cgroup driver to use...
	I1213 10:18:26.089055    6436 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:18:26.089055    6436 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1213 10:18:26.114904    6436 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1213 10:18:26.114904    6436 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1213 10:18:26.117645    6436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:18:26.141438    6436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:18:26.157448    6436 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:18:26.162966    6436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:18:26.185388    6436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:18:26.203387    6436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:18:26.222181    6436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:18:26.243918    6436 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:18:26.263533    6436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:18:26.282021    6436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:18:26.300472    6436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 10:18:26.318951    6436 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:18:26.338197    6436 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:18:26.355934    6436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:18:26.515974    6436 ssh_runner.go:195] Run: sudo systemctl restart containerd
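	The run of sed edits above rewrites /etc/containerd/config.toml so containerd matches the detected cgroupfs driver, uses the runc v2 shim, pins the pause image, and re-enables unprivileged ports. A minimal in-node verification of the key toggles after the restart:

		grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
		# expected per the commands above: SystemdCgroup = false,
		# sandbox_image = "registry.k8s.io/pause:3.10.1", conf_dir = "/etc/cni/net.d"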
	I1213 10:18:26.678919    6436 start.go:496] detecting cgroup driver to use...
	I1213 10:18:26.678919    6436 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:18:26.683641    6436 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 10:18:26.711207    6436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:18:26.734145    6436 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:18:26.804219    6436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:18:26.826924    6436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:18:26.845891    6436 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:18:26.873675    6436 ssh_runner.go:195] Run: which cri-dockerd
	I1213 10:18:26.885295    6436 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 10:18:26.901543    6436 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1213 10:18:26.927054    6436 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 10:18:27.074868    6436 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 10:18:27.220639    6436 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 10:18:27.220639    6436 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 10:18:27.247352    6436 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1213 10:18:27.269754    6436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:18:27.406145    6436 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 10:18:28.338614    6436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:18:28.361878    6436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 10:18:28.386758    6436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 10:18:28.411570    6436 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 10:18:28.566008    6436 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 10:18:28.705997    6436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:18:28.844726    6436 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 10:18:28.872276    6436 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1213 10:18:28.897690    6436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:18:29.050328    6436 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 10:18:29.160315    6436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 10:18:29.179685    6436 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 10:18:29.183577    6436 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 10:18:29.193318    6436 start.go:564] Will wait 60s for crictl version
	I1213 10:18:29.197925    6436 ssh_runner.go:195] Run: which crictl
	I1213 10:18:29.211339    6436 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:18:29.250132    6436 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
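	crictl here resolves its endpoint from the /etc/crictl.yaml written a few lines earlier, pointing at cri-dockerd. The endpoint can also be passed explicitly, which helps when the config file is in doubt (a sketch):

		sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
		sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a   # CRI view of containers once pods exist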
	I1213 10:18:29.253051    6436 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 10:18:29.297904    6436 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 10:18:29.346127    6436 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.2 ...
	I1213 10:18:29.349206    6436 cli_runner.go:164] Run: docker exec -t kindnet-416400 dig +short host.docker.internal
	I1213 10:18:29.481324    6436 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1213 10:18:29.485462    6436 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1213 10:18:29.492970    6436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
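	The bash pipeline above rewrites /etc/hosts in one pass: drop any stale host.minikube.internal entry, append the freshly dug host IP, and copy the temp file back. Confirming the record from the host side, with the name and IP shown in this log:

		docker exec kindnet-416400 getent hosts host.minikube.internal
		# expected: 192.168.65.254   host.minikube.internal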
	I1213 10:18:29.513099    6436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-416400
	I1213 10:18:29.567532    6436 kubeadm.go:884] updating cluster {Name:kindnet-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.112.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:18:29.567532    6436 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:18:29.571976    6436 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 10:18:29.603069    6436 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 10:18:29.603069    6436 docker.go:621] Images already preloaded, skipping extraction
	I1213 10:18:29.606775    6436 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 10:18:29.636795    6436 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 10:18:29.636873    6436 cache_images.go:86] Images are preloaded, skipping loading
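	Both docker images listings match the preload manifest, so image loading is skipped. The authoritative list of what kubeadm itself requires can be printed with the binary path from this log (a sketch, run inside the node):

		sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config images list \
		  --kubernetes-version v1.34.2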
	I1213 10:18:29.636873    6436 kubeadm.go:935] updating node { 192.168.112.2 8443 v1.34.2 docker true true} ...
	I1213 10:18:29.637182    6436 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-416400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.112.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kindnet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1213 10:18:29.640640    6436 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1213 10:18:29.717246    6436 cni.go:84] Creating CNI manager for "kindnet"
	I1213 10:18:29.717246    6436 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:18:29.717246    6436 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.112.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-416400 NodeName:kindnet-416400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.112.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.112.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:18:29.717768    6436 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.112.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kindnet-416400"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.112.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.112.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
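	The generated file above chains InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, and is what the later kubeadm init --config invocation consumes. Recent kubeadm releases can sanity-check such a file before init; a sketch against the path this log writes it to:

		sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
		  --config /var/tmp/minikube/kubeadm.yaml.new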
	
	I1213 10:18:29.722022    6436 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 10:18:29.735466    6436 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:18:29.739653    6436 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:18:29.752920    6436 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I1213 10:18:29.771785    6436 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 10:18:29.793195    6436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1213 10:18:29.820404    6436 ssh_runner.go:195] Run: grep 192.168.112.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:18:29.827776    6436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.112.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:18:29.846433    6436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:18:29.993539    6436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:18:30.015546    6436 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400 for IP: 192.168.112.2
	I1213 10:18:30.015546    6436 certs.go:195] generating shared ca certs ...
	I1213 10:18:30.015546    6436 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:18:30.019869    6436 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1213 10:18:30.020451    6436 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1213 10:18:30.020579    6436 certs.go:257] generating profile certs ...
	I1213 10:18:30.021180    6436 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\client.key
	I1213 10:18:30.021313    6436 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\client.crt with IP's: []
	I1213 10:18:30.146477    6436 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\client.crt ...
	I1213 10:18:30.146477    6436 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\client.crt: {Name:mkd6637358729139ecfd576159aa799f74601ebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:18:30.147744    6436 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\client.key ...
	I1213 10:18:30.147744    6436 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\client.key: {Name:mk7b2f1c0cfb7d3236676fe22a6a2e02321660f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:18:30.148609    6436 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\apiserver.key.79c939fa
	I1213 10:18:30.148609    6436 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\apiserver.crt.79c939fa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.112.2]
	I1213 10:18:30.194193    6436 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\apiserver.crt.79c939fa ...
	I1213 10:18:30.194193    6436 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\apiserver.crt.79c939fa: {Name:mkcf000d01e57652eb1d696f74641e296dcc6b6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:18:30.195200    6436 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\apiserver.key.79c939fa ...
	I1213 10:18:30.195200    6436 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\apiserver.key.79c939fa: {Name:mk98bec7ea6abeaacabc6e2d59d485c678ef44df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:18:30.196188    6436 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\apiserver.crt.79c939fa -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\apiserver.crt
	I1213 10:18:30.210043    6436 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\apiserver.key.79c939fa -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\apiserver.key
	I1213 10:18:30.210921    6436 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\proxy-client.key
	I1213 10:18:30.211002    6436 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\proxy-client.crt with IP's: []
	I1213 10:18:30.379995    6436 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\proxy-client.crt ...
	I1213 10:18:30.379995    6436 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\proxy-client.crt: {Name:mka99b6fd95f4a3ef90cf74ccf4a634d972286bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:18:30.381995    6436 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\proxy-client.key ...
	I1213 10:18:30.381995    6436 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\proxy-client.key: {Name:mk161cd913d229dfc73844b23a418bcc9ecaf3fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:18:30.396619    6436 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem (1338 bytes)
	W1213 10:18:30.397154    6436 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968_empty.pem, impossibly tiny 0 bytes
	I1213 10:18:30.397201    6436 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1213 10:18:30.397367    6436 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1213 10:18:30.397569    6436 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1213 10:18:30.397753    6436 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1213 10:18:30.397920    6436 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem (1708 bytes)
	I1213 10:18:30.398195    6436 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:18:30.428573    6436 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:18:30.456424    6436 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:18:30.486311    6436 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 10:18:30.512520    6436 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 10:18:30.541869    6436 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 10:18:30.572657    6436 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:18:30.598814    6436 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-416400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:18:30.627499    6436 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:18:30.660047    6436 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem --> /usr/share/ca-certificates/2968.pem (1338 bytes)
	I1213 10:18:30.688072    6436 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /usr/share/ca-certificates/29682.pem (1708 bytes)
	I1213 10:18:30.715069    6436 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:18:30.739838    6436 ssh_runner.go:195] Run: openssl version
	I1213 10:18:30.753158    6436 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:18:30.771203    6436 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:18:30.791155    6436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:18:30.801764    6436 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:18:30.806025    6436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:18:30.854932    6436 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:18:30.872920    6436 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 10:18:30.890571    6436 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2968.pem
	I1213 10:18:30.909121    6436 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2968.pem /etc/ssl/certs/2968.pem
	I1213 10:18:30.930538    6436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2968.pem
	I1213 10:18:30.942190    6436 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:48 /usr/share/ca-certificates/2968.pem
	I1213 10:18:30.946588    6436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2968.pem
	I1213 10:18:30.997209    6436 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:18:31.013683    6436 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2968.pem /etc/ssl/certs/51391683.0
	I1213 10:18:31.031341    6436 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29682.pem
	I1213 10:18:31.048049    6436 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29682.pem /etc/ssl/certs/29682.pem
	I1213 10:18:31.063936    6436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29682.pem
	I1213 10:18:31.071438    6436 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:48 /usr/share/ca-certificates/29682.pem
	I1213 10:18:31.075569    6436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29682.pem
	I1213 10:18:31.124453    6436 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:18:31.141819    6436 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/29682.pem /etc/ssl/certs/3ec20f2e.0
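
	Each openssl x509 -hash -noout / ln -fs pair above follows OpenSSL's hashed-directory convention: TLS clients look a CA up under /etc/ssl/certs/<subject-hash>.0, so minikube computes the hash and creates the matching symlink. A sketch of one iteration (the hash differs per certificate; b5213941 is what this run produced for minikubeCA.pem):

	    $ h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"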
	I1213 10:18:31.158065    6436 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:18:31.164591    6436 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 10:18:31.164591    6436 kubeadm.go:401] StartCluster: {Name:kindnet-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.112.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:18:31.169895    6436 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 10:18:31.204095    6436 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:18:31.222020    6436 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:18:31.235100    6436 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:18:31.239276    6436 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:18:31.252375    6436 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:18:31.252439    6436 kubeadm.go:158] found existing configuration files:
	
	I1213 10:18:31.256508    6436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 10:18:31.271052    6436 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:18:31.275722    6436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:18:31.294577    6436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 10:18:31.307122    6436 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:18:31.311875    6436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:18:31.328714    6436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 10:18:31.341055    6436 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:18:31.345183    6436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:18:31.364504    6436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 10:18:31.377263    6436 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:18:31.382542    6436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:18:31.400203    6436 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:18:31.463155    6436 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 10:18:31.463155    6436 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:18:31.619445    6436 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:18:31.620047    6436 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:18:31.620047    6436 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:18:31.643102    6436 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:18:31.646478    6436 out.go:252]   - Generating certificates and keys ...
	I1213 10:18:31.646697    6436 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:18:31.646867    6436 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:18:31.807647    6436 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 10:18:32.189289    6436 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 10:18:32.269975    6436 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 10:18:32.493809    6436 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 10:18:33.070308    6436 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 10:18:33.070842    6436 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-416400 localhost] and IPs [192.168.112.2 127.0.0.1 ::1]
	I1213 10:18:33.241112    6436 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 10:18:33.241112    6436 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-416400 localhost] and IPs [192.168.112.2 127.0.0.1 ::1]
	I1213 10:18:33.946732    6436 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 10:18:34.996569    6436 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 10:18:35.178469    6436 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 10:18:35.178469    6436 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:18:35.502286    6436 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:18:35.718787    6436 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:18:35.973817    6436 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:18:36.013580    6436 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:18:36.097257    6436 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:18:36.097383    6436 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:18:36.103695    6436 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:18:36.110302    6436 out.go:252]   - Booting up control plane ...
	I1213 10:18:36.110302    6436 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:18:36.110927    6436 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:18:36.110927    6436 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:18:36.132499    6436 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:18:36.132499    6436 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:18:36.143585    6436 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:18:36.144157    6436 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:18:36.144240    6436 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:18:36.305169    6436 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:18:36.305779    6436 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:18:36.807019    6436 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.098064ms
	I1213 10:18:36.815000    6436 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 10:18:36.815000    6436 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.112.2:8443/livez
	I1213 10:18:36.815000    6436 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 10:18:36.815000    6436 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 10:18:41.310393    6436 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.495143419s
	I1213 10:18:42.512400    6436 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.697639215s
	I1213 10:18:44.816527    6436 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.002007574s
	I1213 10:18:44.840692    6436 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 10:18:44.868908    6436 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 10:18:44.890082    6436 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 10:18:44.890617    6436 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-416400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 10:18:44.902595    6436 kubeadm.go:319] [bootstrap-token] Using token: kv3rgb.csl1tr961w4ueyzl
	I1213 10:18:44.907006    6436 out.go:252]   - Configuring RBAC rules ...
	I1213 10:18:44.907339    6436 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 10:18:44.918250    6436 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 10:18:44.931519    6436 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 10:18:44.936521    6436 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 10:18:44.942513    6436 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 10:18:44.948522    6436 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 10:18:45.228497    6436 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 10:18:45.661569    6436 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 10:18:46.226249    6436 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 10:18:46.229316    6436 kubeadm.go:319] 
	I1213 10:18:46.229316    6436 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 10:18:46.229316    6436 kubeadm.go:319] 
	I1213 10:18:46.229977    6436 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 10:18:46.230074    6436 kubeadm.go:319] 
	I1213 10:18:46.230127    6436 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 10:18:46.230312    6436 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 10:18:46.230459    6436 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 10:18:46.230459    6436 kubeadm.go:319] 
	I1213 10:18:46.230668    6436 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 10:18:46.230668    6436 kubeadm.go:319] 
	I1213 10:18:46.230718    6436 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 10:18:46.230718    6436 kubeadm.go:319] 
	I1213 10:18:46.230718    6436 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 10:18:46.230718    6436 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 10:18:46.231261    6436 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 10:18:46.231324    6436 kubeadm.go:319] 
	I1213 10:18:46.231553    6436 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 10:18:46.231553    6436 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 10:18:46.231553    6436 kubeadm.go:319] 
	I1213 10:18:46.231553    6436 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token kv3rgb.csl1tr961w4ueyzl \
	I1213 10:18:46.232219    6436 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4e186cc62273bb1ac6e3884beccb3b1254d51eaaca530d60f3ff3ceb07e5bb75 \
	I1213 10:18:46.232298    6436 kubeadm.go:319] 	--control-plane 
	I1213 10:18:46.232376    6436 kubeadm.go:319] 
	I1213 10:18:46.232504    6436 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 10:18:46.232504    6436 kubeadm.go:319] 
	I1213 10:18:46.232504    6436 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token kv3rgb.csl1tr961w4ueyzl \
	I1213 10:18:46.232504    6436 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4e186cc62273bb1ac6e3884beccb3b1254d51eaaca530d60f3ff3ceb07e5bb75 
	I1213 10:18:46.236736    6436 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1213 10:18:46.236736    6436 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1213 10:18:46.237427    6436 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
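
	The --discovery-token-ca-cert-hash printed in both join commands is a SHA-256 over the cluster CA's public key. It can be recomputed with the standard OpenSSL pipeline from the Kubernetes documentation, shown here as a sketch against minikube's certificateDir from the config above (this assumes an RSA CA key, which the ~1.7 KB key files listed earlier suggest):

	    $ openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	        | openssl rsa -pubin -outform der 2>/dev/null \
	        | openssl dgst -sha256 -hex | sed 's/^.* //'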
	I1213 10:18:46.237496    6436 cni.go:84] Creating CNI manager for "kindnet"
	I1213 10:18:46.239395    6436 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1213 10:18:46.246568    6436 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1213 10:18:46.255937    6436 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1213 10:18:46.255937    6436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1213 10:18:46.280345    6436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1213 10:18:46.581481    6436 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 10:18:46.586307    6436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:18:46.586307    6436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-416400 minikube.k8s.io/updated_at=2025_12_13T10_18_46_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453 minikube.k8s.io/name=kindnet-416400 minikube.k8s.io/primary=true
	I1213 10:18:46.597424    6436 ops.go:34] apiserver oom_adj: -16
	I1213 10:18:46.728915    6436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:18:47.229312    6436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:18:47.729359    6436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:18:48.230273    6436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:18:48.728324    6436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:18:49.228639    6436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:18:49.728953    6436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:18:50.229608    6436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:18:50.729008    6436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:18:51.229114    6436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:18:51.728075    6436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:18:51.929882    6436 kubeadm.go:1114] duration metric: took 5.348325s to wait for elevateKubeSystemPrivileges
	I1213 10:18:51.929882    6436 kubeadm.go:403] duration metric: took 20.7649958s to StartCluster
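
	The burst of identical kubectl get sa default runs above is minikube polling, roughly every 500ms, until the "default" ServiceAccount exists before elevateKubeSystemPrivileges is considered settled. As a shell sketch, the loop is equivalent to:

	    $ until sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do sleep 0.5; done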
	I1213 10:18:51.929882    6436 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:18:51.929882    6436 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:18:51.931864    6436 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:18:51.932403    6436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 10:18:51.932403    6436 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.112.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 10:18:51.933003    6436 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:18:51.933079    6436 addons.go:70] Setting storage-provisioner=true in profile "kindnet-416400"
	I1213 10:18:51.933079    6436 addons.go:239] Setting addon storage-provisioner=true in "kindnet-416400"
	I1213 10:18:51.933079    6436 host.go:66] Checking if "kindnet-416400" exists ...
	I1213 10:18:51.933079    6436 config.go:182] Loaded profile config "kindnet-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 10:18:51.933079    6436 addons.go:70] Setting default-storageclass=true in profile "kindnet-416400"
	I1213 10:18:51.933079    6436 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-416400"
	I1213 10:18:51.935162    6436 out.go:179] * Verifying Kubernetes components...
	I1213 10:18:51.944769    6436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:18:51.945446    6436 cli_runner.go:164] Run: docker container inspect kindnet-416400 --format={{.State.Status}}
	I1213 10:18:51.945446    6436 cli_runner.go:164] Run: docker container inspect kindnet-416400 --format={{.State.Status}}
	I1213 10:18:52.003589    6436 addons.go:239] Setting addon default-storageclass=true in "kindnet-416400"
	I1213 10:18:52.003589    6436 host.go:66] Checking if "kindnet-416400" exists ...
	I1213 10:18:52.007602    6436 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:18:52.011601    6436 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:18:52.011601    6436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:18:52.012619    6436 cli_runner.go:164] Run: docker container inspect kindnet-416400 --format={{.State.Status}}
	I1213 10:18:52.015600    6436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-416400
	I1213 10:18:52.068594    6436 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:18:52.068594    6436 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:18:52.070591    6436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53379 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-416400\id_rsa Username:docker}
	I1213 10:18:52.071615    6436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-416400
	I1213 10:18:52.130053    6436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53379 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-416400\id_rsa Username:docker}
	I1213 10:18:52.312863    6436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 10:18:52.437410    6436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:18:52.443404    6436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:18:52.617151    6436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:18:53.116904    6436 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
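
	Reconstructed from the sed expressions in the command at 10:18:52.312863, the fragment injected into the CoreDNS Corefile (alongside a log directive) is:

	    hosts {
	       192.168.65.254 host.minikube.internal
	       fallthrough
	    }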
	I1213 10:18:53.623180    6436 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-416400" context rescaled to 1 replicas
	I1213 10:18:53.629524    6436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.1920975s)
	I1213 10:18:53.629524    6436 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.1861036s)
	I1213 10:18:53.629524    6436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.0123588s)
	I1213 10:18:53.633325    6436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-416400
	I1213 10:18:53.645242    6436 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1213 10:18:53.648407    6436 addons.go:530] duration metric: took 1.7154525s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1213 10:18:53.685401    6436 node_ready.go:35] waiting up to 15m0s for node "kindnet-416400" to be "Ready" ...
	W1213 10:18:55.711871    6436 node_ready.go:57] node "kindnet-416400" has "Ready":"False" status (will retry)
	W1213 10:18:58.190992    6436 node_ready.go:57] node "kindnet-416400" has "Ready":"False" status (will retry)
	W1213 10:19:00.321421    6436 node_ready.go:57] node "kindnet-416400" has "Ready":"False" status (will retry)
	W1213 10:19:02.691668    6436 node_ready.go:57] node "kindnet-416400" has "Ready":"False" status (will retry)
	W1213 10:19:05.191930    6436 node_ready.go:57] node "kindnet-416400" has "Ready":"False" status (will retry)
	W1213 10:19:07.691636    6436 node_ready.go:57] node "kindnet-416400" has "Ready":"False" status (will retry)
	W1213 10:19:09.692407    6436 node_ready.go:57] node "kindnet-416400" has "Ready":"False" status (will retry)
	W1213 10:19:12.192032    6436 node_ready.go:57] node "kindnet-416400" has "Ready":"False" status (will retry)
	W1213 10:19:14.193193    6436 node_ready.go:57] node "kindnet-416400" has "Ready":"False" status (will retry)
	I1213 10:19:14.691136    6436 node_ready.go:49] node "kindnet-416400" is "Ready"
	I1213 10:19:14.691136    6436 node_ready.go:38] duration metric: took 21.0054376s for node "kindnet-416400" to be "Ready" ...
	I1213 10:19:14.691136    6436 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:19:14.695222    6436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:19:14.716134    6436 api_server.go:72] duration metric: took 22.7834074s to wait for apiserver process to appear ...
	I1213 10:19:14.716208    6436 api_server.go:88] waiting for apiserver healthz status ...
	I1213 10:19:14.716244    6436 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:53378/healthz ...
	I1213 10:19:14.727121    6436 api_server.go:279] https://127.0.0.1:53378/healthz returned 200:
	ok
	I1213 10:19:14.729476    6436 api_server.go:141] control plane version: v1.34.2
	I1213 10:19:14.729476    6436 api_server.go:131] duration metric: took 13.2673ms to wait for apiserver health ...
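
	The healthz probe goes through the Docker-published port: 127.0.0.1:53378 on the Windows host maps to the apiserver's 8443 inside the kindnet-416400 container (the docker container inspect "8443/tcp" call above resolves that mapping). As a sketch, the same check by hand needs -k because the apiserver certificate is not in the host trust store; /healthz is reachable anonymously via the default system:public-info-viewer binding:

	    $ curl -sk https://127.0.0.1:53378/healthz
	    ok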
	I1213 10:19:14.729476    6436 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 10:19:14.739692    6436 system_pods.go:59] 8 kube-system pods found
	I1213 10:19:14.739726    6436 system_pods.go:61] "coredns-66bc5c9577-zdv29" [f8313904-f538-42a6-85dd-e01a912078f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:19:14.739762    6436 system_pods.go:61] "etcd-kindnet-416400" [1da5e2e3-4ed4-408c-ae3c-654377f7c53e] Running
	I1213 10:19:14.739762    6436 system_pods.go:61] "kindnet-jvbh6" [3bf68d38-1eab-448b-a3ed-1b24ab2cc74a] Running
	I1213 10:19:14.739762    6436 system_pods.go:61] "kube-apiserver-kindnet-416400" [2433e1d6-be45-4130-913b-ab251e7a1c01] Running
	I1213 10:19:14.739762    6436 system_pods.go:61] "kube-controller-manager-kindnet-416400" [08e2b201-2707-4417-a451-7f810ad87a99] Running
	I1213 10:19:14.739762    6436 system_pods.go:61] "kube-proxy-w7hxj" [82435944-c64f-4bbd-8f31-214297669bec] Running
	I1213 10:19:14.739880    6436 system_pods.go:61] "kube-scheduler-kindnet-416400" [3eb135d9-6772-4ebd-98bf-92b1f2b72e1d] Running
	I1213 10:19:14.739880    6436 system_pods.go:61] "storage-provisioner" [fd592dfc-6e6d-4dac-b375-c24f439a12d5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:19:14.739880    6436 system_pods.go:74] duration metric: took 10.4037ms to wait for pod list to return data ...
	I1213 10:19:14.739880    6436 default_sa.go:34] waiting for default service account to be created ...
	I1213 10:19:14.743875    6436 default_sa.go:45] found service account: "default"
	I1213 10:19:14.743875    6436 default_sa.go:55] duration metric: took 3.995ms for default service account to be created ...
	I1213 10:19:14.743875    6436 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 10:19:14.749264    6436 system_pods.go:86] 8 kube-system pods found
	I1213 10:19:14.749264    6436 system_pods.go:89] "coredns-66bc5c9577-zdv29" [f8313904-f538-42a6-85dd-e01a912078f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:19:14.749264    6436 system_pods.go:89] "etcd-kindnet-416400" [1da5e2e3-4ed4-408c-ae3c-654377f7c53e] Running
	I1213 10:19:14.749264    6436 system_pods.go:89] "kindnet-jvbh6" [3bf68d38-1eab-448b-a3ed-1b24ab2cc74a] Running
	I1213 10:19:14.749264    6436 system_pods.go:89] "kube-apiserver-kindnet-416400" [2433e1d6-be45-4130-913b-ab251e7a1c01] Running
	I1213 10:19:14.749264    6436 system_pods.go:89] "kube-controller-manager-kindnet-416400" [08e2b201-2707-4417-a451-7f810ad87a99] Running
	I1213 10:19:14.749264    6436 system_pods.go:89] "kube-proxy-w7hxj" [82435944-c64f-4bbd-8f31-214297669bec] Running
	I1213 10:19:14.749264    6436 system_pods.go:89] "kube-scheduler-kindnet-416400" [3eb135d9-6772-4ebd-98bf-92b1f2b72e1d] Running
	I1213 10:19:14.749264    6436 system_pods.go:89] "storage-provisioner" [fd592dfc-6e6d-4dac-b375-c24f439a12d5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:19:14.749264    6436 retry.go:31] will retry after 269.125545ms: missing components: kube-dns
	I1213 10:19:15.026062    6436 system_pods.go:86] 8 kube-system pods found
	I1213 10:19:15.026062    6436 system_pods.go:89] "coredns-66bc5c9577-zdv29" [f8313904-f538-42a6-85dd-e01a912078f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:19:15.026062    6436 system_pods.go:89] "etcd-kindnet-416400" [1da5e2e3-4ed4-408c-ae3c-654377f7c53e] Running
	I1213 10:19:15.026062    6436 system_pods.go:89] "kindnet-jvbh6" [3bf68d38-1eab-448b-a3ed-1b24ab2cc74a] Running
	I1213 10:19:15.026062    6436 system_pods.go:89] "kube-apiserver-kindnet-416400" [2433e1d6-be45-4130-913b-ab251e7a1c01] Running
	I1213 10:19:15.026062    6436 system_pods.go:89] "kube-controller-manager-kindnet-416400" [08e2b201-2707-4417-a451-7f810ad87a99] Running
	I1213 10:19:15.026062    6436 system_pods.go:89] "kube-proxy-w7hxj" [82435944-c64f-4bbd-8f31-214297669bec] Running
	I1213 10:19:15.026062    6436 system_pods.go:89] "kube-scheduler-kindnet-416400" [3eb135d9-6772-4ebd-98bf-92b1f2b72e1d] Running
	I1213 10:19:15.026062    6436 system_pods.go:89] "storage-provisioner" [fd592dfc-6e6d-4dac-b375-c24f439a12d5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:19:15.026062    6436 retry.go:31] will retry after 373.227785ms: missing components: kube-dns
	I1213 10:19:15.412301    6436 system_pods.go:86] 8 kube-system pods found
	I1213 10:19:15.412841    6436 system_pods.go:89] "coredns-66bc5c9577-zdv29" [f8313904-f538-42a6-85dd-e01a912078f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:19:15.412841    6436 system_pods.go:89] "etcd-kindnet-416400" [1da5e2e3-4ed4-408c-ae3c-654377f7c53e] Running
	I1213 10:19:15.412841    6436 system_pods.go:89] "kindnet-jvbh6" [3bf68d38-1eab-448b-a3ed-1b24ab2cc74a] Running
	I1213 10:19:15.412841    6436 system_pods.go:89] "kube-apiserver-kindnet-416400" [2433e1d6-be45-4130-913b-ab251e7a1c01] Running
	I1213 10:19:15.412841    6436 system_pods.go:89] "kube-controller-manager-kindnet-416400" [08e2b201-2707-4417-a451-7f810ad87a99] Running
	I1213 10:19:15.412841    6436 system_pods.go:89] "kube-proxy-w7hxj" [82435944-c64f-4bbd-8f31-214297669bec] Running
	I1213 10:19:15.412841    6436 system_pods.go:89] "kube-scheduler-kindnet-416400" [3eb135d9-6772-4ebd-98bf-92b1f2b72e1d] Running
	I1213 10:19:15.412841    6436 system_pods.go:89] "storage-provisioner" [fd592dfc-6e6d-4dac-b375-c24f439a12d5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:19:15.412841    6436 retry.go:31] will retry after 391.924742ms: missing components: kube-dns
	I1213 10:19:15.812849    6436 system_pods.go:86] 8 kube-system pods found
	I1213 10:19:15.812960    6436 system_pods.go:89] "coredns-66bc5c9577-zdv29" [f8313904-f538-42a6-85dd-e01a912078f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:19:15.812960    6436 system_pods.go:89] "etcd-kindnet-416400" [1da5e2e3-4ed4-408c-ae3c-654377f7c53e] Running
	I1213 10:19:15.812960    6436 system_pods.go:89] "kindnet-jvbh6" [3bf68d38-1eab-448b-a3ed-1b24ab2cc74a] Running
	I1213 10:19:15.812993    6436 system_pods.go:89] "kube-apiserver-kindnet-416400" [2433e1d6-be45-4130-913b-ab251e7a1c01] Running
	I1213 10:19:15.812993    6436 system_pods.go:89] "kube-controller-manager-kindnet-416400" [08e2b201-2707-4417-a451-7f810ad87a99] Running
	I1213 10:19:15.812993    6436 system_pods.go:89] "kube-proxy-w7hxj" [82435944-c64f-4bbd-8f31-214297669bec] Running
	I1213 10:19:15.812993    6436 system_pods.go:89] "kube-scheduler-kindnet-416400" [3eb135d9-6772-4ebd-98bf-92b1f2b72e1d] Running
	I1213 10:19:15.812993    6436 system_pods.go:89] "storage-provisioner" [fd592dfc-6e6d-4dac-b375-c24f439a12d5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:19:15.813075    6436 retry.go:31] will retry after 479.414477ms: missing components: kube-dns
	I1213 10:19:16.307221    6436 system_pods.go:86] 8 kube-system pods found
	I1213 10:19:16.307221    6436 system_pods.go:89] "coredns-66bc5c9577-zdv29" [f8313904-f538-42a6-85dd-e01a912078f9] Running
	I1213 10:19:16.307221    6436 system_pods.go:89] "etcd-kindnet-416400" [1da5e2e3-4ed4-408c-ae3c-654377f7c53e] Running
	I1213 10:19:16.307221    6436 system_pods.go:89] "kindnet-jvbh6" [3bf68d38-1eab-448b-a3ed-1b24ab2cc74a] Running
	I1213 10:19:16.307221    6436 system_pods.go:89] "kube-apiserver-kindnet-416400" [2433e1d6-be45-4130-913b-ab251e7a1c01] Running
	I1213 10:19:16.307221    6436 system_pods.go:89] "kube-controller-manager-kindnet-416400" [08e2b201-2707-4417-a451-7f810ad87a99] Running
	I1213 10:19:16.307221    6436 system_pods.go:89] "kube-proxy-w7hxj" [82435944-c64f-4bbd-8f31-214297669bec] Running
	I1213 10:19:16.307221    6436 system_pods.go:89] "kube-scheduler-kindnet-416400" [3eb135d9-6772-4ebd-98bf-92b1f2b72e1d] Running
	I1213 10:19:16.307221    6436 system_pods.go:89] "storage-provisioner" [fd592dfc-6e6d-4dac-b375-c24f439a12d5] Running
	I1213 10:19:16.307221    6436 system_pods.go:126] duration metric: took 1.5633245s to wait for k8s-apps to be running ...
	I1213 10:19:16.307221    6436 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 10:19:16.311635    6436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:19:16.335961    6436 system_svc.go:56] duration metric: took 28.7391ms WaitForService to wait for kubelet
	I1213 10:19:16.335961    6436 kubeadm.go:587] duration metric: took 24.4032111s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:19:16.335961    6436 node_conditions.go:102] verifying NodePressure condition ...
	I1213 10:19:16.342023    6436 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1213 10:19:16.342023    6436 node_conditions.go:123] node cpu capacity is 16
	I1213 10:19:16.342023    6436 node_conditions.go:105] duration metric: took 6.0615ms to run NodePressure ...
	I1213 10:19:16.342023    6436 start.go:242] waiting for startup goroutines ...
	I1213 10:19:16.342023    6436 start.go:247] waiting for cluster config update ...
	I1213 10:19:16.342023    6436 start.go:256] writing updated cluster config ...
	I1213 10:19:16.347063    6436 ssh_runner.go:195] Run: rm -f paused
	I1213 10:19:16.354228    6436 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 10:19:16.359571    6436 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zdv29" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:19:16.368367    6436 pod_ready.go:94] pod "coredns-66bc5c9577-zdv29" is "Ready"
	I1213 10:19:16.368367    6436 pod_ready.go:86] duration metric: took 8.796ms for pod "coredns-66bc5c9577-zdv29" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:19:16.372785    6436 pod_ready.go:83] waiting for pod "etcd-kindnet-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:19:16.380352    6436 pod_ready.go:94] pod "etcd-kindnet-416400" is "Ready"
	I1213 10:19:16.380352    6436 pod_ready.go:86] duration metric: took 7.5674ms for pod "etcd-kindnet-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:19:16.384407    6436 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:19:16.392802    6436 pod_ready.go:94] pod "kube-apiserver-kindnet-416400" is "Ready"
	I1213 10:19:16.392802    6436 pod_ready.go:86] duration metric: took 8.3955ms for pod "kube-apiserver-kindnet-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:19:16.396783    6436 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:19:16.761805    6436 pod_ready.go:94] pod "kube-controller-manager-kindnet-416400" is "Ready"
	I1213 10:19:16.761893    6436 pod_ready.go:86] duration metric: took 365.1038ms for pod "kube-controller-manager-kindnet-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:19:16.961157    6436 pod_ready.go:83] waiting for pod "kube-proxy-w7hxj" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:19:17.362658    6436 pod_ready.go:94] pod "kube-proxy-w7hxj" is "Ready"
	I1213 10:19:17.362760    6436 pod_ready.go:86] duration metric: took 401.5981ms for pod "kube-proxy-w7hxj" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:19:17.561639    6436 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:19:17.960069    6436 pod_ready.go:94] pod "kube-scheduler-kindnet-416400" is "Ready"
	I1213 10:19:17.960069    6436 pod_ready.go:86] duration metric: took 398.3151ms for pod "kube-scheduler-kindnet-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:19:17.960069    6436 pod_ready.go:40] duration metric: took 1.6057352s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 10:19:18.056260    6436 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 10:19:18.060597    6436 out.go:179] * Done! kubectl is now configured to use "kindnet-416400" cluster and "default" namespace by default
	
	
	==> Docker <==
	Dec 13 10:09:34 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:34.979577017Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 13 10:09:34 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:34.979670526Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 13 10:09:34 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:34.979683227Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 13 10:09:34 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:34.979688528Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 10:09:34 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:34.979693828Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 13 10:09:34 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:34.979719131Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 13 10:09:34 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:34.979754934Z" level=info msg="Initializing buildkit"
	Dec 13 10:09:35 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:35.145509829Z" level=info msg="Completed buildkit initialization"
	Dec 13 10:09:35 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:35.154410655Z" level=info msg="Daemon has completed initialization"
	Dec 13 10:09:35 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:35.154649477Z" level=info msg="API listen on /run/docker.sock"
	Dec 13 10:09:35 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:35.154687681Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 10:09:35 no-preload-803600 dockerd[1172]: time="2025-12-13T10:09:35.154696782Z" level=info msg="API listen on [::]:2376"
	Dec 13 10:09:35 no-preload-803600 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 13 10:09:35 no-preload-803600 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 10:09:35 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:35Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 13 10:09:35 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:35Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 13 10:09:35 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:35Z" level=info msg="Start docker client with request timeout 0s"
	Dec 13 10:09:36 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:36Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 13 10:09:36 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:36Z" level=info msg="Loaded network plugin cni"
	Dec 13 10:09:36 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:36Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 13 10:09:36 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:36Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 13 10:09:36 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:36Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 13 10:09:36 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:36Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 13 10:09:36 no-preload-803600 cri-dockerd[1467]: time="2025-12-13T10:09:36Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 13 10:09:36 no-preload-803600 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:19:41.858570   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:19:41.859825   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:19:41.861666   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:19:41.863136   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:19:41.863998   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +6.696318] CPU: 6 PID: 408229 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7fdfdb9f3b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7fdfdb9f3af6.
	[  +0.000001] RSP: 002b:00007ffef22200d0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.844923] CPU: 3 PID: 408378 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7fe6c2ac2b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7fe6c2ac2af6.
	[  +0.000001] RSP: 002b:00007fff26688df0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +9.183686] tmpfs: Unknown parameter 'noswap'
	[  +9.061869] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 10:19:41 up  1:55,  0 user,  load average: 3.07, 3.14, 3.25
	Linux no-preload-803600 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:19:38 no-preload-803600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:19:38 no-preload-803600 kubelet[13115]: E1213 10:19:38.958501   13115 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:19:38 no-preload-803600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:19:38 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:19:39 no-preload-803600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 446.
	Dec 13 10:19:39 no-preload-803600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:19:39 no-preload-803600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:19:39 no-preload-803600 kubelet[13129]: E1213 10:19:39.703125   13129 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:19:39 no-preload-803600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:19:39 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:19:40 no-preload-803600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 447.
	Dec 13 10:19:40 no-preload-803600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:19:40 no-preload-803600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:19:40 no-preload-803600 kubelet[13155]: E1213 10:19:40.453277   13155 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:19:40 no-preload-803600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:19:40 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:19:41 no-preload-803600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 448.
	Dec 13 10:19:41 no-preload-803600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:19:41 no-preload-803600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:19:41 no-preload-803600 kubelet[13183]: E1213 10:19:41.222440   13183 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:19:41 no-preload-803600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:19:41 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:19:41 no-preload-803600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 449.
	Dec 13 10:19:41 no-preload-803600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:19:41 no-preload-803600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
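The kubelet section above is the root of this failure dump: the v1.35.0-beta.0 kubelet exits at startup because the host is on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), systemd restarts it (counters 446 through 449), and with no kubelet the apiserver on localhost:8443 never comes up, which is why "describe nodes" is refused and "container status" is empty. The Docker daemon's "Support for cgroup v1 is deprecated" warning and the 5.15.153.1-microsoft-standard-WSL2 kernel in dmesg point the same way. A minimal sketch of the check, assuming shell access to the node (for example via `minikube ssh -p no-preload-803600`):

	# Which cgroup filesystem does the kernel expose?
	#   cgroup2fs -> cgroup v2 (unified hierarchy, accepted by this kubelet)
	#   tmpfs     -> legacy cgroup v1 (rejected with the error above)
	stat -fc %T /sys/fs/cgroup/

On a WSL2 host, the commonly cited remedy (an assumption here, not verified on this runner) is forcing cgroup v2 with `kernelCommandLine = cgroup_no_v1=all` under `[wsl2]` in `.wslconfig` and restarting WSL.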
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-803600 -n no-preload-803600
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-803600 -n no-preload-803600: exit status 6 (590.655ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1213 10:19:42.822153    6812 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-803600" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-803600" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (88.43s)
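The addon check itself never ran: stdout reports the cluster as Stopped behind a stale kubectl context, and stderr shows the "no-preload-803600" entry is missing from the kubeconfig altogether. A sketch of the manual inspection, using only paths and commands already present in the log:

	# List the contexts the test's kubeconfig actually contains:
	kubectl config get-contexts --kubeconfig "C:\Users\jenkins.minikube4\minikube-integration\kubeconfig"
	# If the profile's entry is absent or stale, have minikube rewrite it,
	# as the WARNING in stdout suggests:
	out/minikube-windows-amd64.exe update-context -p no-preload-803600

This only repairs the kubeconfig entry; with the apiserver down, `status` would still report Stopped.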

TestStartStop/group/no-preload/serial/SecondStart (378.53s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-803600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0
E1213 10:19:46.057740    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p no-preload-803600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 80 (6m14.9788092s)

-- stdout --
	* [no-preload-803600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting "no-preload-803600" primary control-plane node in "no-preload-803600" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Verifying Kubernetes components...
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	* Enabled addons: 
	
	

-- /stdout --
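Note that stdout ends with an empty "* Enabled addons:" list even though the profile config (dumped in stderr below) carries Addons:map[dashboard:true], so addon restoration never completed before the start failed. A quick way to inspect per-profile addon state, assuming the same binary and profile name:

	out/minikube-windows-amd64.exe addons list -p no-preload-803600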
** stderr ** 
	I1213 10:19:45.348646    8468 out.go:360] Setting OutFile to fd 1724 ...
	I1213 10:19:45.394569    8468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:19:45.394569    8468 out.go:374] Setting ErrFile to fd 1208...
	I1213 10:19:45.394972    8468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:19:45.408028    8468 out.go:368] Setting JSON to false
	I1213 10:19:45.411496    8468 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6992,"bootTime":1765614192,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 10:19:45.411652    8468 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 10:19:45.414897    8468 out.go:179] * [no-preload-803600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 10:19:45.417165    8468 notify.go:221] Checking for updates...
	I1213 10:19:45.419300    8468 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:19:45.421305    8468 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:19:45.423304    8468 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 10:19:45.425291    8468 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:19:45.428296    8468 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:19:45.430295    8468 config.go:182] Loaded profile config "no-preload-803600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:19:45.431306    8468 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:19:45.544259    8468 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 10:19:45.547412    8468 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:19:45.799291    8468 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:95 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:19:45.779035639 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:19:45.803297    8468 out.go:179] * Using the docker driver based on existing profile
	I1213 10:19:45.805294    8468 start.go:309] selected driver: docker
	I1213 10:19:45.805294    8468 start.go:927] validating driver "docker" against &{Name:no-preload-803600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-803600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:19:45.805294    8468 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:19:45.896305    8468 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:19:46.151757    8468 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:95 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:19:46.132203366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:19:46.151757    8468 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:19:46.152419    8468 cni.go:84] Creating CNI manager for ""
	I1213 10:19:46.152419    8468 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 10:19:46.152419    8468 start.go:353] cluster config:
	{Name:no-preload-803600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-803600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:19:46.156414    8468 out.go:179] * Starting "no-preload-803600" primary control-plane node in "no-preload-803600" cluster
	I1213 10:19:46.158422    8468 cache.go:134] Beginning downloading kic base image for docker with docker
	I1213 10:19:46.160414    8468 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:19:46.162414    8468 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:19:46.162414    8468 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 10:19:46.162414    8468 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\config.json ...
	I1213 10:19:46.163417    8468 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1213 10:19:46.163417    8468 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1213 10:19:46.163417    8468 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1213 10:19:46.163417    8468 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1213 10:19:46.163417    8468 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1213 10:19:46.163417    8468 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1213 10:19:46.163417    8468 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1213 10:19:46.163417    8468 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1213 10:19:46.360109    8468 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:19:46.360109    8468 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:19:46.360109    8468 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:19:46.360109    8468 start.go:360] acquireMachinesLock for no-preload-803600: {Name:mkcf862c61e4405506d111940ccf3455664885da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:19:46.360109    8468 start.go:364] duration metric: took 0s to acquireMachinesLock for "no-preload-803600"
	I1213 10:19:46.360109    8468 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:19:46.360109    8468 fix.go:54] fixHost starting: 
	I1213 10:19:46.378111    8468 cli_runner.go:164] Run: docker container inspect no-preload-803600 --format={{.State.Status}}
	I1213 10:19:46.528215    8468 fix.go:112] recreateIfNeeded on no-preload-803600: state=Stopped err=<nil>
	W1213 10:19:46.528215    8468 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:19:46.540200    8468 out.go:252] * Restarting existing docker container for "no-preload-803600" ...
	I1213 10:19:46.544189    8468 cli_runner.go:164] Run: docker start no-preload-803600
	I1213 10:19:47.909943    8468 cli_runner.go:217] Completed: docker start no-preload-803600: (1.3657343s)
	I1213 10:19:47.920065    8468 cli_runner.go:164] Run: docker container inspect no-preload-803600 --format={{.State.Status}}
	I1213 10:19:48.200892    8468 kic.go:430] container "no-preload-803600" state is running.
	I1213 10:19:48.208912    8468 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-803600
	I1213 10:19:48.341576    8468 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\config.json ...
	I1213 10:19:48.344507    8468 machine.go:94] provisionDockerMachine start ...
	I1213 10:19:48.352044    8468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:19:48.458965    8468 main.go:143] libmachine: Using SSH client type: native
	I1213 10:19:48.459954    8468 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53489 <nil> <nil>}
	I1213 10:19:48.459954    8468 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:19:48.472941    8468 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 10:19:49.536065    8468 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:19:49.536065    8468 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1213 10:19:49.536065    8468 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 3.3725997s
	I1213 10:19:49.536065    8468 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1213 10:19:49.580065    8468 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:19:49.580065    8468 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1213 10:19:49.580065    8468 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.416599s
	I1213 10:19:49.580065    8468 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1213 10:19:49.591058    8468 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:19:49.592055    8468 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1213 10:19:49.592055    8468 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 3.4285888s
	I1213 10:19:49.592055    8468 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1213 10:19:49.592055    8468 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:19:49.592055    8468 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1213 10:19:49.592055    8468 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 3.4285888s
	I1213 10:19:49.592055    8468 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1213 10:19:49.593059    8468 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:19:49.593059    8468 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1213 10:19:49.594068    8468 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 3.4306014s
	I1213 10:19:49.594068    8468 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1213 10:19:49.597052    8468 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:19:49.597052    8468 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1213 10:19:49.597052    8468 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 3.4335857s
	I1213 10:19:49.597052    8468 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1213 10:19:49.642066    8468 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:19:49.642066    8468 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1213 10:19:49.642066    8468 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 3.4785984s
	I1213 10:19:49.643055    8468 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1213 10:19:49.655068    8468 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:19:49.655068    8468 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1213 10:19:49.656079    8468 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 3.4926116s
	I1213 10:19:49.656079    8468 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1213 10:19:49.656079    8468 cache.go:87] Successfully saved all images to host disk.
	I1213 10:19:51.655610    8468 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-803600
	
	I1213 10:19:51.655610    8468 ubuntu.go:182] provisioning hostname "no-preload-803600"
	I1213 10:19:51.659825    8468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:19:51.725462    8468 main.go:143] libmachine: Using SSH client type: native
	I1213 10:19:51.726463    8468 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53489 <nil> <nil>}
	I1213 10:19:51.726463    8468 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-803600 && echo "no-preload-803600" | sudo tee /etc/hostname
	I1213 10:19:51.930842    8468 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-803600
	
	I1213 10:19:51.935838    8468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:19:51.999055    8468 main.go:143] libmachine: Using SSH client type: native
	I1213 10:19:51.999055    8468 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53489 <nil> <nil>}
	I1213 10:19:51.999055    8468 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-803600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-803600/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-803600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:19:52.193801    8468 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:19:52.193801    8468 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1213 10:19:52.193801    8468 ubuntu.go:190] setting up certificates
	I1213 10:19:52.193801    8468 provision.go:84] configureAuth start
	I1213 10:19:52.198089    8468 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-803600
	I1213 10:19:52.260779    8468 provision.go:143] copyHostCerts
	I1213 10:19:52.261366    8468 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1213 10:19:52.261366    8468 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1213 10:19:52.261366    8468 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1213 10:19:52.262748    8468 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1213 10:19:52.262777    8468 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1213 10:19:52.262997    8468 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1213 10:19:52.264143    8468 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1213 10:19:52.264193    8468 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1213 10:19:52.264534    8468 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1213 10:19:52.265279    8468 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.no-preload-803600 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-803600]
	I1213 10:19:52.298901    8468 provision.go:177] copyRemoteCerts
	I1213 10:19:52.302899    8468 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:19:52.305898    8468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:19:52.362435    8468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53489 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-803600\id_rsa Username:docker}
	I1213 10:19:52.510179    8468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:19:52.543440    8468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:19:52.578143    8468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:19:52.617233    8468 provision.go:87] duration metric: took 423.4266ms to configureAuth
	I1213 10:19:52.617785    8468 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:19:52.618358    8468 config.go:182] Loaded profile config "no-preload-803600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:19:52.623872    8468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:19:52.686925    8468 main.go:143] libmachine: Using SSH client type: native
	I1213 10:19:52.687445    8468 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53489 <nil> <nil>}
	I1213 10:19:52.687480    8468 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 10:19:52.872375    8468 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1213 10:19:52.872375    8468 ubuntu.go:71] root file system type: overlay
	I1213 10:19:52.872928    8468 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 10:19:52.876824    8468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:19:52.937002    8468 main.go:143] libmachine: Using SSH client type: native
	I1213 10:19:52.937976    8468 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53489 <nil> <nil>}
	I1213 10:19:52.938076    8468 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 10:19:53.158443    8468 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 10:19:53.163633    8468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:19:53.227440    8468 main.go:143] libmachine: Using SSH client type: native
	I1213 10:19:53.228437    8468 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53489 <nil> <nil>}
	I1213 10:19:53.228437    8468 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 10:19:53.418226    8468 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:19:53.418226    8468 machine.go:97] duration metric: took 5.0736468s to provisionDockerMachine
	I1213 10:19:53.418226    8468 start.go:293] postStartSetup for "no-preload-803600" (driver="docker")
	I1213 10:19:53.418226    8468 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:19:53.423070    8468 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:19:53.425911    8468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:19:53.481934    8468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53489 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-803600\id_rsa Username:docker}
	I1213 10:19:53.613891    8468 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:19:53.621539    8468 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:19:53.621539    8468 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:19:53.621539    8468 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1213 10:19:53.622540    8468 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1213 10:19:53.622540    8468 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> 29682.pem in /etc/ssl/certs
	I1213 10:19:53.627908    8468 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 10:19:53.644674    8468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /etc/ssl/certs/29682.pem (1708 bytes)
	I1213 10:19:53.683710    8468 start.go:296] duration metric: took 265.4807ms for postStartSetup
	I1213 10:19:53.688461    8468 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:19:53.692588    8468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:19:53.748028    8468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53489 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-803600\id_rsa Username:docker}
	I1213 10:19:53.885106    8468 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:19:53.894800    8468 fix.go:56] duration metric: took 7.5345823s for fixHost
	I1213 10:19:53.894800    8468 start.go:83] releasing machines lock for "no-preload-803600", held for 7.5345823s
	I1213 10:19:53.899723    8468 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-803600
	I1213 10:19:53.963324    8468 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1213 10:19:53.968697    8468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:19:53.969613    8468 ssh_runner.go:195] Run: cat /version.json
	I1213 10:19:53.975962    8468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:19:54.036036    8468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53489 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-803600\id_rsa Username:docker}
	I1213 10:19:54.036036    8468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53489 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-803600\id_rsa Username:docker}
	W1213 10:19:54.161183    8468 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
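
Note the exit status 127: minikube invoked curl.exe, the Windows binary name, inside the Linux guest, where only plain curl exists. The failure therefore reflects a missing binary rather than (necessarily) a network problem, and the registry warning a few lines below follows from it. A probe that does reflect guest connectivity, assuming the same profile:

	minikube -p no-preload-803600 ssh -- curl -sS -m 2 https://registry.k8s.io/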
	I1213 10:19:54.188871    8468 ssh_runner.go:195] Run: systemctl --version
	I1213 10:19:54.207422    8468 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:19:54.219936    8468 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:19:54.224766    8468 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:19:54.239474    8468 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:19:54.239474    8468 start.go:496] detecting cgroup driver to use...
	I1213 10:19:54.239474    8468 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:19:54.239474    8468 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1213 10:19:54.265468    8468 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1213 10:19:54.265468    8468 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1213 10:19:54.265468    8468 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:19:54.291479    8468 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:19:54.310876    8468 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:19:54.315529    8468 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:19:54.336055    8468 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:19:54.361908    8468 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:19:54.388301    8468 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:19:54.409976    8468 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:19:54.431219    8468 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:19:54.453984    8468 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:19:54.475694    8468 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
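
The sed edits above leave /etc/containerd/config.toml with roughly the following CRI-plugin settings. The exact nesting depends on the containerd config version shipped in the base image; this is an illustrative excerpt, not a verbatim dump:

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.10.1"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	    SystemdCgroup = false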
	I1213 10:19:54.494481    8468 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:19:54.510809    8468 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:19:54.529069    8468 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:19:54.685256    8468 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 10:19:54.852460    8468 start.go:496] detecting cgroup driver to use...
	I1213 10:19:54.852460    8468 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:19:54.859898    8468 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 10:19:54.893855    8468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:19:54.918575    8468 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:19:54.992684    8468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:19:55.022485    8468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:19:55.041497    8468 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:19:55.072168    8468 ssh_runner.go:195] Run: which cri-dockerd
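
With /etc/crictl.yaml now pointing at unix:///var/run/cri-dockerd.sock, crictl commands go through cri-dockerd. A quick sanity check, assuming the same profile:

	minikube -p no-preload-803600 ssh -- sudo crictl version
	# Expected: RuntimeName docker, matching the start.go:580 output further down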
	I1213 10:19:55.086598    8468 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 10:19:55.099071    8468 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1213 10:19:55.125793    8468 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 10:19:55.290490    8468 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 10:19:55.470123    8468 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 10:19:55.470123    8468 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
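
The 130-byte daemon.json payload is not shown in the log; the setting that the "configuring docker to use cgroupfs" message refers to is the exec-opts entry, so the file has at least this shape (illustrative, other keys may be present):

	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}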
	I1213 10:19:55.495115    8468 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1213 10:19:55.516116    8468 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:19:55.678804    8468 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 10:19:56.651196    8468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:19:56.673194    8468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 10:19:56.696194    8468 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1213 10:19:56.720195    8468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 10:19:56.742198    8468 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 10:19:56.894197    8468 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 10:19:57.071576    8468 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:19:57.242586    8468 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 10:19:57.270585    8468 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1213 10:19:57.293575    8468 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:19:57.460207    8468 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 10:19:57.582812    8468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 10:19:57.601837    8468 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 10:19:57.606825    8468 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 10:19:57.614835    8468 start.go:564] Will wait 60s for crictl version
	I1213 10:19:57.618835    8468 ssh_runner.go:195] Run: which crictl
	I1213 10:19:57.629815    8468 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:19:57.676812    8468 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1213 10:19:57.679825    8468 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 10:19:57.727828    8468 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 10:19:57.774825    8468 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1213 10:19:57.777830    8468 cli_runner.go:164] Run: docker exec -t no-preload-803600 dig +short host.docker.internal
	I1213 10:19:57.906058    8468 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1213 10:19:57.911053    8468 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1213 10:19:57.918056    8468 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:19:57.936058    8468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:19:57.993058    8468 kubeadm.go:884] updating cluster {Name:no-preload-803600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-803600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:19:57.993058    8468 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 10:19:57.997057    8468 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 10:19:58.041072    8468 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 10:19:58.041072    8468 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:19:58.041072    8468 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 docker true true} ...
	I1213 10:19:58.041072    8468 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-803600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-803600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
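
The kubelet unit above, together with the 10-kubeadm.conf drop-in copied a few lines below, is what systemd actually runs; the merged view can be inspected with (assuming the same profile):

	minikube -p no-preload-803600 ssh -- sudo systemctl cat kubelet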
	I1213 10:19:58.045057    8468 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1213 10:19:58.129059    8468 cni.go:84] Creating CNI manager for ""
	I1213 10:19:58.129059    8468 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 10:19:58.129059    8468 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:19:58.129059    8468 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-803600 NodeName:no-preload-803600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:19:58.129059    8468 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-803600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
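
This generated config is copied below to /var/tmp/minikube/kubeadm.yaml.new; recent kubeadm releases can lint such a file before it is used, e.g. (a sketch, run inside the guest):

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new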
	
	I1213 10:19:58.136730    8468 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:19:58.164214    8468 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:19:58.169591    8468 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:19:58.190279    8468 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1213 10:19:58.210281    8468 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:19:58.231278    8468 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1213 10:19:58.255299    8468 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:19:58.262280    8468 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:19:58.282275    8468 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:19:58.450327    8468 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:19:58.474330    8468 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600 for IP: 192.168.103.2
	I1213 10:19:58.474330    8468 certs.go:195] generating shared ca certs ...
	I1213 10:19:58.474330    8468 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:19:58.475324    8468 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1213 10:19:58.475324    8468 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1213 10:19:58.475324    8468 certs.go:257] generating profile certs ...
	I1213 10:19:58.476316    8468 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\client.key
	I1213 10:19:58.476316    8468 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\apiserver.key.e3e76275
	I1213 10:19:58.476316    8468 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\proxy-client.key
	I1213 10:19:58.477316    8468 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem (1338 bytes)
	W1213 10:19:58.477316    8468 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968_empty.pem, impossibly tiny 0 bytes
	I1213 10:19:58.477316    8468 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1213 10:19:58.477316    8468 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1213 10:19:58.477316    8468 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1213 10:19:58.478318    8468 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1213 10:19:58.478318    8468 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem (1708 bytes)
	I1213 10:19:58.479315    8468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:19:58.508640    8468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:19:58.548552    8468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:19:58.581866    8468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 10:19:58.609853    8468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:19:58.637842    8468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 10:19:58.664838    8468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:19:58.690840    8468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-803600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:19:58.716841    8468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem --> /usr/share/ca-certificates/2968.pem (1338 bytes)
	I1213 10:19:58.744844    8468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /usr/share/ca-certificates/29682.pem (1708 bytes)
	I1213 10:19:58.773841    8468 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:19:58.801847    8468 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:19:58.827837    8468 ssh_runner.go:195] Run: openssl version
	I1213 10:19:58.840843    8468 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:19:58.856838    8468 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:19:58.875846    8468 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:19:58.881853    8468 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:19:58.886845    8468 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:19:58.936844    8468 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:19:58.953841    8468 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2968.pem
	I1213 10:19:58.970843    8468 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2968.pem /etc/ssl/certs/2968.pem
	I1213 10:19:58.993844    8468 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2968.pem
	I1213 10:19:59.002846    8468 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:48 /usr/share/ca-certificates/2968.pem
	I1213 10:19:59.008843    8468 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2968.pem
	I1213 10:19:59.060844    8468 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:19:59.078853    8468 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29682.pem
	I1213 10:19:59.095845    8468 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29682.pem /etc/ssl/certs/29682.pem
	I1213 10:19:59.117856    8468 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29682.pem
	I1213 10:19:59.127875    8468 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:48 /usr/share/ca-certificates/29682.pem
	I1213 10:19:59.134853    8468 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29682.pem
	I1213 10:19:59.199855    8468 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
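
The test -s / ln -fs / openssl x509 -hash / test -L sequence above implements OpenSSL's hashed certificate directory layout: each CA placed in /etc/ssl/certs gets a symlink named <subject-hash>.0, which is where names like b5213941.0 come from. The generic pattern, as a sketch:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"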
	I1213 10:19:59.218849    8468 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:19:59.229862    8468 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:19:59.284846    8468 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:19:59.338867    8468 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:19:59.391862    8468 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:19:59.449852    8468 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:19:59.498848    8468 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
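
The -checkend 86400 runs above exit non-zero if a certificate expires within the next 24 hours, which is how minikube decides whether control-plane certs need regenerating. For example:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for >24h" || echo "expiring soon"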
	I1213 10:19:59.548312    8468 kubeadm.go:401] StartCluster: {Name:no-preload-803600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-803600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:19:59.552312    8468 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 10:19:59.586306    8468 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:19:59.598310    8468 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:19:59.598310    8468 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:19:59.603315    8468 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:19:59.616313    8468 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:19:59.619322    8468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:19:59.672306    8468 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-803600" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:19:59.673316    8468 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-803600" cluster setting kubeconfig missing "no-preload-803600" context setting]
	I1213 10:19:59.673316    8468 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:19:59.695315    8468 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:19:59.708317    8468 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1213 10:19:59.708317    8468 kubeadm.go:602] duration metric: took 110.0061ms to restartPrimaryControlPlane
	I1213 10:19:59.708317    8468 kubeadm.go:403] duration metric: took 160.0026ms to StartCluster
	I1213 10:19:59.708317    8468 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:19:59.708317    8468 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:19:59.710317    8468 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:19:59.711310    8468 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 10:19:59.711310    8468 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:19:59.711310    8468 addons.go:70] Setting storage-provisioner=true in profile "no-preload-803600"
	I1213 10:19:59.711310    8468 config.go:182] Loaded profile config "no-preload-803600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:19:59.711310    8468 addons.go:239] Setting addon storage-provisioner=true in "no-preload-803600"
	I1213 10:19:59.711310    8468 addons.go:70] Setting dashboard=true in profile "no-preload-803600"
	I1213 10:19:59.711310    8468 addons.go:70] Setting default-storageclass=true in profile "no-preload-803600"
	I1213 10:19:59.711310    8468 addons.go:239] Setting addon dashboard=true in "no-preload-803600"
	I1213 10:19:59.711310    8468 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-803600"
	W1213 10:19:59.711310    8468 addons.go:248] addon dashboard should already be in state true
	I1213 10:19:59.711310    8468 host.go:66] Checking if "no-preload-803600" exists ...
	I1213 10:19:59.711310    8468 host.go:66] Checking if "no-preload-803600" exists ...
	I1213 10:19:59.713307    8468 out.go:179] * Verifying Kubernetes components...
	I1213 10:19:59.720313    8468 cli_runner.go:164] Run: docker container inspect no-preload-803600 --format={{.State.Status}}
	I1213 10:19:59.721308    8468 cli_runner.go:164] Run: docker container inspect no-preload-803600 --format={{.State.Status}}
	I1213 10:19:59.722332    8468 cli_runner.go:164] Run: docker container inspect no-preload-803600 --format={{.State.Status}}
	I1213 10:19:59.722332    8468 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:19:59.779316    8468 addons.go:239] Setting addon default-storageclass=true in "no-preload-803600"
	I1213 10:19:59.779316    8468 host.go:66] Checking if "no-preload-803600" exists ...
	I1213 10:19:59.780311    8468 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 10:19:59.780311    8468 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:19:59.784317    8468 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:19:59.784317    8468 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:19:59.785315    8468 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 10:19:59.787317    8468 cli_runner.go:164] Run: docker container inspect no-preload-803600 --format={{.State.Status}}
	I1213 10:19:59.787317    8468 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 10:19:59.787317    8468 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 10:19:59.788324    8468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:19:59.792327    8468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:19:59.858928    8468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53489 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-803600\id_rsa Username:docker}
	I1213 10:19:59.860927    8468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53489 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-803600\id_rsa Username:docker}
	I1213 10:19:59.876934    8468 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:19:59.876934    8468 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:19:59.880921    8468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:19:59.934936    8468 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:19:59.944946    8468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53489 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-803600\id_rsa Username:docker}
	I1213 10:20:00.014929    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:20:00.019925    8468 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 10:20:00.019925    8468 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 10:20:00.043936    8468 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 10:20:00.043936    8468 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 10:20:00.062924    8468 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 10:20:00.062924    8468 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 10:20:00.084948    8468 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 10:20:00.084948    8468 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 10:20:00.107939    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:20:00.116936    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:00.116936    8468 retry.go:31] will retry after 144.167955ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
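
These apply failures are all the same symptom: kubectl validation needs the apiserver's OpenAPI document, and the apiserver behind localhost:8443 is still coming back up after the restart, so the connection is refused. retry.go handles this with short growing delays; the behaviour is roughly (delays illustrative):

	for delay in 0.15 0.3 0.5 1 2; do
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	    -f /etc/kubernetes/addons/storage-provisioner.yaml && break
	  sleep "$delay"
	done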
	I1213 10:20:00.118928    8468 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 10:20:00.118928    8468 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 10:20:00.120935    8468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-803600
	I1213 10:20:00.141946    8468 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 10:20:00.141946    8468 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 10:20:00.174939    8468 node_ready.go:35] waiting up to 6m0s for node "no-preload-803600" to be "Ready" ...
	I1213 10:20:00.210929    8468 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 10:20:00.210929    8468 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 10:20:00.233928    8468 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 10:20:00.233928    8468 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 10:20:00.265926    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:20:00.306941    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:00.306941    8468 retry.go:31] will retry after 304.215433ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:00.308932    8468 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:20:00.308932    8468 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 10:20:00.334936    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:20:00.412659    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:00.412659    8468 retry.go:31] will retry after 248.011222ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:20:00.452473    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:00.452473    8468 retry.go:31] will retry after 193.016412ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:00.615883    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:20:00.650420    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:20:00.667437    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:20:00.723038    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:00.723038    8468 retry.go:31] will retry after 274.806255ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:20:00.742030    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:00.742030    8468 retry.go:31] will retry after 472.531153ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:20:00.749032    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:00.749032    8468 retry.go:31] will retry after 579.1039ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:01.005644    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:20:01.105042    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:01.105117    8468 retry.go:31] will retry after 356.643726ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:01.219557    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:20:01.326343    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:01.326425    8468 retry.go:31] will retry after 527.387848ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:01.332984    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:20:01.442913    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:01.442913    8468 retry.go:31] will retry after 618.757524ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:01.467705    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:20:01.559983    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:01.559983    8468 retry.go:31] will retry after 517.794902ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:01.859077    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:20:01.961238    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:01.961344    8468 retry.go:31] will retry after 951.479311ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:02.065958    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:20:02.083213    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:20:02.202581    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:02.202581    8468 retry.go:31] will retry after 1.626406937s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:20:02.205376    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:02.205376    8468 retry.go:31] will retry after 1.356627859s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:02.916911    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:20:03.013538    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:03.013623    8468 retry.go:31] will retry after 1.009060071s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:03.567486    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:20:03.669545    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:03.669615    8468 retry.go:31] will retry after 2.730145514s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:03.833890    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:20:03.933369    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:03.933369    8468 retry.go:31] will retry after 953.74116ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:04.031203    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:20:04.123922    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:04.123922    8468 retry.go:31] will retry after 1.028826334s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:04.892193    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:20:05.009755    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:05.009755    8468 retry.go:31] will retry after 3.987673438s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:05.159224    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:20:05.253591    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:05.253591    8468 retry.go:31] will retry after 2.748602401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:06.405791    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:20:06.525149    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:06.525253    8468 retry.go:31] will retry after 2.08678072s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
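
The --validate=false suggestion in the kubectl error text is a red herring here: it would skip the OpenAPI download, but the apply itself still has to reach the same unreachable apiserver. A hypothetical invocation with validation disabled (same manifest as above) would be expected to fail on the request instead of on validation:

	kubectl apply --validate=false -f /etc/kubernetes/addons/storageclass.yaml
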
	I1213 10:20:08.005957    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:20:08.108449    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:08.108449    8468 retry.go:31] will retry after 5.341747858s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:08.617243    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:20:08.726909    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:08.726909    8468 retry.go:31] will retry after 2.851562768s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:09.002464    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:20:09.102476    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:09.102476    8468 retry.go:31] will retry after 5.884602128s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:20:10.203458    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
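
The node_ready checks fail differently from the addon applies: they go through the Docker-published port on the host (127.0.0.1:53494, forwarded into the no-preload-803600 container) and get EOF rather than connection refused, meaning the TCP forward accepts the connection but nothing answers behind it. Two standard commands one could run from the host to confirm (profile and context names taken from this log):

	minikube -p no-preload-803600 status
	kubectl --context no-preload-803600 get nodes
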
	I1213 10:20:11.583434    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:20:11.714554    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:11.714554    8468 retry.go:31] will retry after 7.485124041s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:13.455547    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:20:13.541922    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:13.542020    8468 retry.go:31] will retry after 8.135198811s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:14.992257    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:20:15.096205    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:15.096205    8468 retry.go:31] will retry after 7.728239711s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:19.204769    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:20:19.294020    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:19.294020    8468 retry.go:31] will retry after 11.049523391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
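
The retry.go:31 lines show each failed apply being rescheduled with a growing, jittered delay (roughly 2s at first, climbing past 20s later in this log). A minimal Go sketch of that pattern, assuming exponential backoff with random jitter; this illustrates the behavior visible in the log, not minikube's actual implementation:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff retries fn until it succeeds or attempts run out,
	// sleeping an exponentially growing, jittered delay between failures.
	func retryWithBackoff(fn func() error, attempts int, base time.Duration) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			d := base << uint(i)                          // 2s, 4s, 8s, ...
			d += time.Duration(rand.Int63n(int64(d / 2))) // up to +50% jitter
			fmt.Printf("will retry after %s: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		err := retryWithBackoff(func() error {
			return fmt.Errorf("connect: connection refused")
		}, 4, 2*time.Second)
		fmt.Println("giving up:", err)
	}
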
	W1213 10:20:20.240166    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	I1213 10:20:21.682179    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:20:21.775384    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:21.775384    8468 retry.go:31] will retry after 13.246201189s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:22.830168    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:20:22.931135    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:22.931135    8468 retry.go:31] will retry after 8.142484329s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:20:30.279416    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	I1213 10:20:30.349244    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:20:30.463245    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:30.463245    8468 retry.go:31] will retry after 10.83926478s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:31.079937    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:20:31.245972    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:31.245972    8468 retry.go:31] will retry after 20.02853134s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:35.025488    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:20:35.127988    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:35.128994    8468 retry.go:31] will retry after 15.628436435s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:20:40.316441    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	I1213 10:20:41.307852    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:20:41.394384    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:41.394470    8468 retry.go:31] will retry after 13.877355955s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:20:50.349344    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	I1213 10:20:50.762129    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:20:50.866259    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:50.866874    8468 retry.go:31] will retry after 28.312317849s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:51.279935    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:20:51.378474    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:51.378474    8468 retry.go:31] will retry after 11.232412768s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:55.277503    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:20:55.359684    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:55.359799    8468 retry.go:31] will retry after 25.131057199s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:21:00.383691    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	I1213 10:21:02.617356    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:21:02.697259    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:21:02.697259    8468 retry.go:31] will retry after 17.55495334s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:21:10.425009    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	I1213 10:21:19.184086    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:21:19.276360    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:21:19.276360    8468 retry.go:31] will retry after 37.24462991s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:21:20.257313    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:21:20.340817    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:21:20.340817    8468 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1213 10:21:20.463313    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	I1213 10:21:20.495524    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:21:20.586987    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:21:20.586987    8468 retry.go:31] will retry after 24.857879765s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:21:30.494436    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	W1213 10:21:40.530803    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	I1213 10:21:45.449630    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:21:45.554635    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:21:45.554635    8468 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1213 10:21:50.566639    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	I1213 10:21:56.526618    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:21:56.624189    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:21:56.624728    8468 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 10:21:56.630182    8468 out.go:179] * Enabled addons: 
	I1213 10:21:56.634860    8468 addons.go:530] duration metric: took 1m56.9218745s for enable addons: enabled=[]
	W1213 10:22:00.599408    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	W1213 10:22:10.635379    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	W1213 10:22:20.671031    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	W1213 10:22:30.704322    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	W1213 10:22:40.739981    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	W1213 10:22:50.775119    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	W1213 10:23:00.807805    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	W1213 10:23:10.841439    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	W1213 10:23:20.876939    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	W1213 10:23:30.911231    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	W1213 10:23:40.946118    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	W1213 10:23:50.979475    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	W1213 10:24:01.020284    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	W1213 10:24:11.051372    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	W1213 10:24:21.089064    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	W1213 10:24:31.126405    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	W1213 10:24:41.162971    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	W1213 10:24:51.194340    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	W1213 10:25:01.228430    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	W1213 10:25:11.260948    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	W1213 10:25:21.290289    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	W1213 10:25:31.326898    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	W1213 10:25:41.363684    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	W1213 10:25:51.396848    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	W1213 10:26:00.181301    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1213 10:26:00.181301    8468 node_ready.go:38] duration metric: took 6m0.0011586s for node "no-preload-803600" to be "Ready" ...
	I1213 10:26:00.184322    8468 out.go:203] 
	W1213 10:26:00.187317    8468 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 10:26:00.187317    8468 out.go:285] * 
	W1213 10:26:00.189310    8468 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:26:00.192302    8468 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p no-preload-803600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0": exit status 80
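(editor's note) Every `kubectl apply` in the stderr above fails the same way: OpenAPI validation cannot reach the apiserver ("dial tcp [::1]:8443: connect: connection refused"), and the node-ready poll against https://127.0.0.1:53494 only ever returns EOF. The addon errors are therefore a symptom, not the cause; the kube-apiserver inside the no-preload-803600 container never became reachable after the restart. A minimal triage sketch under that assumption (standard docker/minikube/crictl commands, not part of the test harness):

	# Is the node container up, and did the control-plane containers start inside it?
	docker ps --filter name=no-preload-803600
	out/minikube-windows-amd64.exe -p no-preload-803600 ssh -- sudo crictl ps -a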
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-803600
helpers_test.go:244: (dbg) docker inspect no-preload-803600:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd",
	        "Created": "2025-12-13T10:09:24.921242732Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 410406,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:19:47.312495248Z",
	            "FinishedAt": "2025-12-13T10:19:43.959791267Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd/hostname",
	        "HostsPath": "/var/lib/docker/containers/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd/hosts",
	        "LogPath": "/var/lib/docker/containers/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd-json.log",
	        "Name": "/no-preload-803600",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-803600:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-803600",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/571041f9092b0534048a0b1dac35e9d4a08a2ff2442796fa15a0636437fe7f5e-init/diff:/var/lib/docker/overlay2/429aa299c6fcdb1695d08ec7c893c57c033afffcd3ec41fc904bf3236db5abde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/571041f9092b0534048a0b1dac35e9d4a08a2ff2442796fa15a0636437fe7f5e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/571041f9092b0534048a0b1dac35e9d4a08a2ff2442796fa15a0636437fe7f5e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/571041f9092b0534048a0b1dac35e9d4a08a2ff2442796fa15a0636437fe7f5e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-803600",
	                "Source": "/var/lib/docker/volumes/no-preload-803600/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-803600",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-803600",
	                "name.minikube.sigs.k8s.io": "no-preload-803600",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "202edcc07e78147ef811fd01911ae5ff35d0d9d006f45e69c81f5303ddbf73f3",
	            "SandboxKey": "/var/run/docker/netns/202edcc07e78",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53489"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53490"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53491"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53493"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53494"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-803600": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ad4e73e428abf58593ff96b4628f21032a7a4afd7c1c0bb8be8d55b4e2d320fc",
	                    "EndpointID": "5315c65ac1c1a0593e57f42a5908d620f4852bb681cd15a9c6018ed864a9d80f",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-803600",
	                        "3960d9897f63"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
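(editor's note) The inspect output ties the two failing endpoints together: container port 8443/tcp is published on 127.0.0.1:53494, the exact address whose node-ready probes returned EOF above, while kubectl inside the node dials localhost:8443 directly. To read just the mapping without the full dump (standard docker CLI; profile name taken from this run):

	docker inspect --format '{{json .NetworkSettings.Ports}}' no-preload-803600
	docker port no-preload-803600 8443/tcp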
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-803600 -n no-preload-803600
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-803600 -n no-preload-803600: exit status 2 (657.9335ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
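(editor's note) The non-zero status is expected given the failure mode: stdout shows "Running" only because --format={{.Host}} queries the container state, not the cluster, and exit status 2 is consistent with the host being up while cluster components are not (hence the harness's "may be ok"). An unfiltered query would show the per-component breakdown, e.g.:

	out/minikube-windows-amd64.exe status -p no-preload-803600 --output=json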
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-803600 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-803600 logs -n 25: (1.2916246s)
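(editor's note) Only the last 25 log lines are collected here (`logs -n 25`); for a complete capture when filing an issue, the advice box above already names the command:

	out/minikube-windows-amd64.exe -p no-preload-803600 logs --file=logs.txt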
helpers_test.go:261: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                     │          PROFILE          │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p enable-default-cni-416400 sudo systemctl status kubelet --all --full --no-pager                           │ enable-default-cni-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ ssh     │ -p enable-default-cni-416400 sudo systemctl cat kubelet --no-pager                                           │ enable-default-cni-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ ssh     │ -p enable-default-cni-416400 sudo journalctl -xeu kubelet --all --full --no-pager                            │ enable-default-cni-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ ssh     │ -p enable-default-cni-416400 sudo cat /etc/kubernetes/kubelet.conf                                           │ enable-default-cni-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ ssh     │ -p enable-default-cni-416400 sudo cat /var/lib/kubelet/config.yaml                                           │ enable-default-cni-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ ssh     │ -p enable-default-cni-416400 sudo systemctl status docker --all --full --no-pager                            │ enable-default-cni-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ ssh     │ -p enable-default-cni-416400 sudo systemctl cat docker --no-pager                                            │ enable-default-cni-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ ssh     │ -p enable-default-cni-416400 sudo cat /etc/docker/daemon.json                                                │ enable-default-cni-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ ssh     │ -p enable-default-cni-416400 sudo docker system info                                                         │ enable-default-cni-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ ssh     │ -p enable-default-cni-416400 sudo systemctl status cri-docker --all --full --no-pager                        │ enable-default-cni-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ ssh     │ -p enable-default-cni-416400 sudo systemctl cat cri-docker --no-pager                                        │ enable-default-cni-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ ssh     │ -p enable-default-cni-416400 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                   │ enable-default-cni-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ ssh     │ -p enable-default-cni-416400 sudo cat /usr/lib/systemd/system/cri-docker.service                             │ enable-default-cni-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ ssh     │ -p enable-default-cni-416400 sudo cri-dockerd --version                                                      │ enable-default-cni-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ ssh     │ -p enable-default-cni-416400 sudo systemctl status containerd --all --full --no-pager                        │ enable-default-cni-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ ssh     │ -p enable-default-cni-416400 sudo systemctl cat containerd --no-pager                                        │ enable-default-cni-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ ssh     │ -p enable-default-cni-416400 sudo cat /lib/systemd/system/containerd.service                                 │ enable-default-cni-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ ssh     │ -p enable-default-cni-416400 sudo cat /etc/containerd/config.toml                                            │ enable-default-cni-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ ssh     │ -p enable-default-cni-416400 sudo containerd config dump                                                     │ enable-default-cni-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ ssh     │ -p enable-default-cni-416400 sudo systemctl status crio --all --full --no-pager                              │ enable-default-cni-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │                     │
	│ ssh     │ -p enable-default-cni-416400 sudo systemctl cat crio --no-pager                                              │ enable-default-cni-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ ssh     │ -p enable-default-cni-416400 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                    │ enable-default-cni-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ ssh     │ -p enable-default-cni-416400 sudo crio config                                                                │ enable-default-cni-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ delete  │ -p enable-default-cni-416400                                                                                 │ enable-default-cni-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ start   │ -p bridge-416400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker │ bridge-416400             │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:25:23
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:25:23.215787   12024 out.go:360] Setting OutFile to fd 1796 ...
	I1213 10:25:23.260523   12024 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:25:23.260523   12024 out.go:374] Setting ErrFile to fd 1240...
	I1213 10:25:23.260523   12024 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:25:23.278465   12024 out.go:368] Setting JSON to false
	I1213 10:25:23.282285   12024 start.go:133] hostinfo: {"hostname":"minikube4","uptime":7330,"bootTime":1765614192,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 10:25:23.282341   12024 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 10:25:23.285417   12024 out.go:179] * [bridge-416400] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 10:25:23.290966   12024 notify.go:221] Checking for updates...
	I1213 10:25:23.296226   12024 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:25:23.301761   12024 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:25:23.304711   12024 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 10:25:23.307687   12024 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:25:23.310702   12024 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:25:23.321704   12024 config.go:182] Loaded profile config "flannel-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 10:25:23.322705   12024 config.go:182] Loaded profile config "newest-cni-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:25:23.322705   12024 config.go:182] Loaded profile config "no-preload-803600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:25:23.322705   12024 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:25:23.441906   12024 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 10:25:23.444905   12024 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:25:23.694837   12024 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:25:23.673203181 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
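
The docker info capture above is worth scanning when pulls from external registries fail: this daemon routes through HTTPProxy/HTTPSProxy http.docker.internal:3128, with NoProxy limited to hubproxy.docker.internal. A quick way to pull just those fields by hand (a sketch, using the same Go-template mechanism the harness uses):

	docker system info --format '{{.HTTPProxy}} {{.HTTPSProxy}} {{.NoProxy}}'
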
	I1213 10:25:23.698838   12024 out.go:179] * Using the docker driver based on user configuration
	I1213 10:25:23.701867   12024 start.go:309] selected driver: docker
	I1213 10:25:23.701867   12024 start.go:927] validating driver "docker" against <nil>
	I1213 10:25:23.701867   12024 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:25:23.753854   12024 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:25:24.004845   12024 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:25:23.984834661 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:25:24.004845   12024 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:25:24.005848   12024 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:25:24.008851   12024 out.go:179] * Using Docker Desktop driver with root privileges
	I1213 10:25:24.010855   12024 cni.go:84] Creating CNI manager for "bridge"
	I1213 10:25:24.010855   12024 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 10:25:24.010855   12024 start.go:353] cluster config:
	{Name:bridge-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:25:24.013846   12024 out.go:179] * Starting "bridge-416400" primary control-plane node in "bridge-416400" cluster
	I1213 10:25:24.016843   12024 cache.go:134] Beginning downloading kic base image for docker with docker
	I1213 10:25:24.018842   12024 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:25:24.021844   12024 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:25:24.021844   12024 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:25:24.021844   12024 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1213 10:25:24.021844   12024 cache.go:65] Caching tarball of preloaded images
	I1213 10:25:24.021844   12024 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 10:25:24.021844   12024 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1213 10:25:24.022842   12024 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-416400\config.json ...
	I1213 10:25:24.022842   12024 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-416400\config.json: {Name:mk86ceaeeef2e73cc970f527c795bde5f97b9562 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:24.090854   12024 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:25:24.090854   12024 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:25:24.091845   12024 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:25:24.091845   12024 start.go:360] acquireMachinesLock for bridge-416400: {Name:mk1d65137bcab0c19213c36568afc26eb05da3e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:25:24.091845   12024 start.go:364] duration metric: took 0s to acquireMachinesLock for "bridge-416400"
	I1213 10:25:24.091845   12024 start.go:93] Provisioning new machine with config: &{Name:bridge-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 10:25:24.091845   12024 start.go:125] createHost starting for "" (driver="docker")
	W1213 10:25:21.290289    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	I1213 10:25:23.263754    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:23.302408    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:23.337266    5404 logs.go:282] 0 containers: []
	W1213 10:25:23.337266    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:23.340260    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:23.370276    5404 logs.go:282] 0 containers: []
	W1213 10:25:23.370276    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:23.375960    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:23.408917    5404 logs.go:282] 0 containers: []
	W1213 10:25:23.408917    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:23.412904    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:23.441906    5404 logs.go:282] 0 containers: []
	W1213 10:25:23.441906    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:23.445905    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:23.475913    5404 logs.go:282] 0 containers: []
	W1213 10:25:23.475913    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:23.478907    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:23.537839    5404 logs.go:282] 0 containers: []
	W1213 10:25:23.537839    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:23.543840    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:23.576844    5404 logs.go:282] 0 containers: []
	W1213 10:25:23.576844    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:23.580844    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:23.614844    5404 logs.go:282] 0 containers: []
	W1213 10:25:23.614844    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:23.614844    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:23.614844    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:23.687842    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:23.687842    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:23.729840    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:23.729840    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:23.842844    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:23.828686   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:23.830655   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:23.832409   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:23.835392   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:23.836394   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:25:23.828686   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:23.830655   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:23.832409   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:23.835392   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:23.836394   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:25:23.842844    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:23.842844    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:25:23.881854    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:23.881854    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
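
The burst of connection-refused errors above means nothing was listening on localhost:8443 inside the node, which matches the empty result for every k8s_* container filter; the harness therefore falls back to kubelet, dmesg, Docker, and crictl logs. The same triage can be run by hand against a live profile (a sketch; <profile> is a placeholder, and the commands mirror the ones logged above):

	minikube ssh -p <profile> "docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}"
	minikube ssh -p <profile> "sudo journalctl -u kubelet -n 400"
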
	I1213 10:25:22.192515    5076 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.2 ...
	I1213 10:25:22.195582    5076 cli_runner.go:164] Run: docker exec -t flannel-416400 dig +short host.docker.internal
	I1213 10:25:22.330647    5076 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1213 10:25:22.335725    5076 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1213 10:25:22.344847    5076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:25:22.576487    5076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" flannel-416400
	I1213 10:25:22.631790    5076 kubeadm.go:884] updating cluster {Name:flannel-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:25:22.631790    5076 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:25:22.635954    5076 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 10:25:22.672318    5076 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 10:25:22.672318    5076 docker.go:621] Images already preloaded, skipping extraction
	I1213 10:25:22.675919    5076 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 10:25:22.711671    5076 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 10:25:22.711774    5076 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:25:22.711774    5076 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 docker true true} ...
	I1213 10:25:22.712052    5076 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-416400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:flannel-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I1213 10:25:22.719869    5076 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1213 10:25:22.793480    5076 cni.go:84] Creating CNI manager for "flannel"
	I1213 10:25:22.793480    5076 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:25:22.793480    5076 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-416400 NodeName:flannel-416400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:25:22.793480    5076 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "flannel-416400"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
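The rendered kubeadm config above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Outside the harness it can be sanity-checked without touching the node state, since kubeadm accepts a dry run (a sketch, run inside the node container; the path matches the scp target below):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
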
	I1213 10:25:22.801277    5076 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 10:25:22.821751    5076 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:25:22.828229    5076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:25:22.842070    5076 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1213 10:25:22.861818    5076 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 10:25:22.884636    5076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1213 10:25:22.913560    5076 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:25:22.920568    5076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
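
The one-liner above is how the harness edits /etc/hosts from a non-root shell: filter out any stale entry and append the new mapping into a temp file, then copy the temp file back with sudo (a plain "sudo echo ... > /etc/hosts" would not work, because the redirection happens before sudo runs). The same logic, unrolled:

	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	  echo $'192.168.85.2\tcontrol-plane.minikube.internal'
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
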
	I1213 10:25:22.940684    5076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:25:23.100176    5076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:25:23.122741    5076 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-416400 for IP: 192.168.85.2
	I1213 10:25:23.122741    5076 certs.go:195] generating shared ca certs ...
	I1213 10:25:23.122741    5076 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:23.123505    5076 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1213 10:25:23.123505    5076 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1213 10:25:23.123505    5076 certs.go:257] generating profile certs ...
	I1213 10:25:23.124180    5076 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-416400\client.key
	I1213 10:25:23.124180    5076 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-416400\client.crt with IP's: []
	I1213 10:25:23.274755    5076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-416400\client.crt ...
	I1213 10:25:23.274755    5076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-416400\client.crt: {Name:mk41434b15106ba79dfbda8627fd0cbce5e15d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:23.276034    5076 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-416400\client.key ...
	I1213 10:25:23.276034    5076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-416400\client.key: {Name:mk28ad49226542f6b97de6692526cfe6ee358e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:23.277232    5076 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-416400\apiserver.key.892f583e
	I1213 10:25:23.277399    5076 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-416400\apiserver.crt.892f583e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1213 10:25:23.548834    5076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-416400\apiserver.crt.892f583e ...
	I1213 10:25:23.548834    5076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-416400\apiserver.crt.892f583e: {Name:mka1b9d499966fd20d2fe16fc5806a99fc6b2658 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:23.549837    5076 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-416400\apiserver.key.892f583e ...
	I1213 10:25:23.549837    5076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-416400\apiserver.key.892f583e: {Name:mk8c1069e6c9760311c6784e75df8283e7d2bf55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:23.550845    5076 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-416400\apiserver.crt.892f583e -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-416400\apiserver.crt
	I1213 10:25:23.563848    5076 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-416400\apiserver.key.892f583e -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-416400\apiserver.key
	I1213 10:25:23.564861    5076 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-416400\proxy-client.key
	I1213 10:25:23.565843    5076 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-416400\proxy-client.crt with IP's: []
	I1213 10:25:23.683843    5076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-416400\proxy-client.crt ...
	I1213 10:25:23.683843    5076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-416400\proxy-client.crt: {Name:mkc5c4d9cd845fb0e1f3b416bc541cb3ebd44b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:23.684838    5076 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-416400\proxy-client.key ...
	I1213 10:25:23.684838    5076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-416400\proxy-client.key: {Name:mk0017c9d7a5540304b330158f89a6e863e9c8f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
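
Three profile certs are generated above: a client cert for minikube-user, an apiserver serving cert whose IP SANs cover the service VIP 10.96.0.1, loopback, 10.0.0.1, and the node IP 192.168.85.2, and a front-proxy ("aggregator") client cert. The SANs on the serving cert can be inspected directly (a sketch; the path is illustrative, taken from the profile directory logged above):

	openssl x509 -noout -text -in profiles/flannel-416400/apiserver.crt | grep -A1 'Subject Alternative Name'
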
	I1213 10:25:23.700852    5076 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem (1338 bytes)
	W1213 10:25:23.700852    5076 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968_empty.pem, impossibly tiny 0 bytes
	I1213 10:25:23.701867    5076 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1213 10:25:23.701867    5076 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1213 10:25:23.701867    5076 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1213 10:25:23.701867    5076 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1213 10:25:23.702848    5076 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem (1708 bytes)
	I1213 10:25:23.703845    5076 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:25:23.739842    5076 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:25:23.773843    5076 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:25:23.806852    5076 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 10:25:23.850852    5076 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-416400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 10:25:23.883852    5076 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-416400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:25:23.912851    5076 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-416400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:25:23.944855    5076 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\flannel-416400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:25:23.972855    5076 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem --> /usr/share/ca-certificates/2968.pem (1338 bytes)
	I1213 10:25:23.999851    5076 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /usr/share/ca-certificates/29682.pem (1708 bytes)
	I1213 10:25:24.026845    5076 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:25:24.053852    5076 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:25:24.076862    5076 ssh_runner.go:195] Run: openssl version
	I1213 10:25:24.090854    5076 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2968.pem
	I1213 10:25:24.107857    5076 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2968.pem /etc/ssl/certs/2968.pem
	I1213 10:25:24.123860    5076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2968.pem
	I1213 10:25:24.130843    5076 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:48 /usr/share/ca-certificates/2968.pem
	I1213 10:25:24.134844    5076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2968.pem
	I1213 10:25:24.181844    5076 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:25:24.199703    5076 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2968.pem /etc/ssl/certs/51391683.0
	I1213 10:25:24.218744    5076 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29682.pem
	I1213 10:25:24.238070    5076 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29682.pem /etc/ssl/certs/29682.pem
	I1213 10:25:24.258421    5076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29682.pem
	I1213 10:25:24.266104    5076 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:48 /usr/share/ca-certificates/29682.pem
	I1213 10:25:24.272319    5076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29682.pem
	I1213 10:25:24.330762    5076 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:25:24.349329    5076 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/29682.pem /etc/ssl/certs/3ec20f2e.0
	I1213 10:25:24.371448    5076 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:25:24.395215    5076 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:25:24.414102    5076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:25:24.421101    5076 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:25:24.425089    5076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:25:24.473998    5076 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:25:24.491582    5076 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
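
The ln/openssl pairs above implement OpenSSL's subject-hash lookup convention: tools find CAs in /etc/ssl/certs through symlinks named <subject-hash>.0, where the hash is whatever "openssl x509 -hash" prints for the certificate (b5213941 for minikubeCA.pem here). To verify a link by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${h}.0"    # should point back at minikubeCA.pem
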
	I1213 10:25:24.510959    5076 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:25:24.517536    5076 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 10:25:24.518537    5076 kubeadm.go:401] StartCluster: {Name:flannel-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:25:24.521558    5076 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 10:25:24.577919    5076 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:25:24.598180    5076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:25:24.612184    5076 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:25:24.616187    5076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:25:24.629184    5076 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:25:24.629184    5076 kubeadm.go:158] found existing configuration files:
	
	I1213 10:25:24.633183    5076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 10:25:24.646192    5076 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:25:24.650196    5076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:25:24.667188    5076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 10:25:24.679182    5076 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:25:24.683181    5076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:25:24.699194    5076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 10:25:24.712192    5076 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:25:24.718184    5076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:25:24.741188    5076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 10:25:24.757186    5076 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:25:24.761183    5076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
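
The four grep-then-rm pairs above are a single stale-config sweep: each kubeconfig under /etc/kubernetes survives only if it already targets https://control-plane.minikube.internal:8443; on this first start none of the files exist, so every rm is a no-op. A condensed equivalent (sketch):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done
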
	I1213 10:25:24.781725    5076 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:25:24.958375    5076 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1213 10:25:24.961924    5076 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1213 10:25:25.055859    5076 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
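
All three preflight warnings are expected in this environment: Swap and SystemVerification are explicitly listed in --ignore-preflight-errors in the init invocation above, and the Service-Kubelet warning is moot because the harness already ran "systemctl start kubelet" itself a few lines earlier. On a hand-managed node that last warning would be addressed with:

	sudo systemctl enable --now kubelet.service
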
	I1213 10:25:24.094845   12024 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 10:25:24.094845   12024 start.go:159] libmachine.API.Create for "bridge-416400" (driver="docker")
	I1213 10:25:24.095848   12024 client.go:173] LocalClient.Create starting
	I1213 10:25:24.095848   12024 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1213 10:25:24.095848   12024 main.go:143] libmachine: Decoding PEM data...
	I1213 10:25:24.095848   12024 main.go:143] libmachine: Parsing certificate...
	I1213 10:25:24.095848   12024 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1213 10:25:24.095848   12024 main.go:143] libmachine: Decoding PEM data...
	I1213 10:25:24.095848   12024 main.go:143] libmachine: Parsing certificate...
	I1213 10:25:24.099842   12024 cli_runner.go:164] Run: docker network inspect bridge-416400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 10:25:24.149843   12024 cli_runner.go:211] docker network inspect bridge-416400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 10:25:24.152852   12024 network_create.go:284] running [docker network inspect bridge-416400] to gather additional debugging logs...
	I1213 10:25:24.152852   12024 cli_runner.go:164] Run: docker network inspect bridge-416400
	W1213 10:25:24.197967   12024 cli_runner.go:211] docker network inspect bridge-416400 returned with exit code 1
	I1213 10:25:24.198033   12024 network_create.go:287] error running [docker network inspect bridge-416400]: docker network inspect bridge-416400: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network bridge-416400 not found
	I1213 10:25:24.198066   12024 network_create.go:289] output of [docker network inspect bridge-416400]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network bridge-416400 not found
	
	** /stderr **
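The two inspect failures above are expected on a fresh profile: exit status 1 with "network bridge-416400 not found" only means the network does not exist yet, so minikube falls through to creating it. The same condition can be checked without parsing stderr:

	# Prints nothing until the network has been created.
	docker network ls --filter name=bridge-416400 --format '{{.Name}}'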
	I1213 10:25:24.204592   12024 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:25:24.282691   12024 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:25:24.298509   12024 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:25:24.314547   12024 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:25:24.329580   12024 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:25:24.345254   12024 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:25:24.361927   12024 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000deede0}
	I1213 10:25:24.361927   12024 network_create.go:124] attempt to create docker network bridge-416400 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1213 10:25:24.365086   12024 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-416400 bridge-416400
	I1213 10:25:24.528438   12024 network_create.go:108] docker network bridge-416400 192.168.94.0/24 created
	I1213 10:25:24.528438   12024 kic.go:121] calculated static IP "192.168.94.2" for the "bridge-416400" container
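minikube probed 192.168.49.0/24 through 192.168.85.0/24, found each reserved by an earlier profile, and settled on 192.168.94.0/24, keeping .1 for the gateway and .2 for the node. A sketch for confirming the chosen subnet and gateway after creation, using the same docker CLI as the log:

	docker network inspect bridge-416400 \
	  --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	# expected: 192.168.94.0/24 192.168.94.1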
	I1213 10:25:24.536169   12024 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 10:25:24.597182   12024 cli_runner.go:164] Run: docker volume create bridge-416400 --label name.minikube.sigs.k8s.io=bridge-416400 --label created_by.minikube.sigs.k8s.io=true
	I1213 10:25:24.656188   12024 oci.go:103] Successfully created a docker volume bridge-416400
	I1213 10:25:24.659190   12024 cli_runner.go:164] Run: docker run --rm --name bridge-416400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-416400 --entrypoint /usr/bin/test -v bridge-416400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 10:25:26.107598   12024 cli_runner.go:217] Completed: docker run --rm --name bridge-416400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-416400 --entrypoint /usr/bin/test -v bridge-416400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.4483873s)
	I1213 10:25:26.107598   12024 oci.go:107] Successfully prepared a docker volume bridge-416400
	I1213 10:25:26.107598   12024 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:25:26.107598   12024 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 10:25:26.110603   12024 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-416400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 10:25:26.445770    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:26.468742    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:26.497606    5404 logs.go:282] 0 containers: []
	W1213 10:25:26.497606    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:26.504336    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:26.535650    5404 logs.go:282] 0 containers: []
	W1213 10:25:26.535650    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:26.539148    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:26.574002    5404 logs.go:282] 0 containers: []
	W1213 10:25:26.574002    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:26.577576    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:26.608168    5404 logs.go:282] 0 containers: []
	W1213 10:25:26.608168    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:26.612250    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:26.648664    5404 logs.go:282] 0 containers: []
	W1213 10:25:26.648664    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:26.652642    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:26.695581    5404 logs.go:282] 0 containers: []
	W1213 10:25:26.695581    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:26.701128    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:26.736805    5404 logs.go:282] 0 containers: []
	W1213 10:25:26.736805    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:26.741531    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:26.775202    5404 logs.go:282] 0 containers: []
	W1213 10:25:26.775202    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:26.775202    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:26.775202    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:25:26.837152    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:26.837152    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:26.907293    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:26.907293    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:26.944829    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:26.944829    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:27.035510    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:27.024018   10233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:27.025149   10233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:27.025873   10233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:27.028475   10233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:27.029437   10233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:25:27.024018   10233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:27.025149   10233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:27.025873   10233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:27.028475   10233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:27.029437   10233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:25:27.035510    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:27.035510    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
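The block above is one iteration of minikube's health loop: pgrep finds no kube-apiserver process, every docker ps -a name filter returns zero containers, and kubectl describe nodes fails because nothing is listening on localhost:8443. The same cycle repeats below every few seconds until the wait times out. A hedged way to watch the same symptom from the host (<profile> is a placeholder for the cluster under test, not a name taken from this run):

	# Probes the apiserver health endpoint inside the node; "connection refused"
	# here matches the describe-nodes failures in the log.
	minikube -p <profile> ssh -- curl -sk https://localhost:8443/healthz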
	I1213 10:25:29.569253    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:29.595146    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:29.629968    5404 logs.go:282] 0 containers: []
	W1213 10:25:29.629968    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:29.639031    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:29.680336    5404 logs.go:282] 0 containers: []
	W1213 10:25:29.680336    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:29.683882    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:29.711575    5404 logs.go:282] 0 containers: []
	W1213 10:25:29.711575    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:29.715562    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:29.748700    5404 logs.go:282] 0 containers: []
	W1213 10:25:29.748700    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:29.754102    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:29.787482    5404 logs.go:282] 0 containers: []
	W1213 10:25:29.787482    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:29.791566    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:29.820415    5404 logs.go:282] 0 containers: []
	W1213 10:25:29.820415    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:29.824718    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:29.855879    5404 logs.go:282] 0 containers: []
	W1213 10:25:29.855879    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:29.861271    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:29.894074    5404 logs.go:282] 0 containers: []
	W1213 10:25:29.894074    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:29.894074    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:29.894074    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:29.959671    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:29.959671    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:30.002475    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:30.002475    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:30.082532    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:30.074335   10383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:30.075482   10383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:30.076694   10383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:30.078180   10383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:30.079305   10383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:25:30.074335   10383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:30.075482   10383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:30.076694   10383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:30.078180   10383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:30.079305   10383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:25:30.082532    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:30.082532    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:25:30.110237    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:30.110297    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:25:31.326898    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	I1213 10:25:32.671686    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:32.693105    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:32.723439    5404 logs.go:282] 0 containers: []
	W1213 10:25:32.723439    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:32.727792    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:32.756940    5404 logs.go:282] 0 containers: []
	W1213 10:25:32.756940    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:32.761232    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:32.791456    5404 logs.go:282] 0 containers: []
	W1213 10:25:32.791456    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:32.800403    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:32.831611    5404 logs.go:282] 0 containers: []
	W1213 10:25:32.831687    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:32.835616    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:32.865546    5404 logs.go:282] 0 containers: []
	W1213 10:25:32.865546    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:32.869732    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:32.902223    5404 logs.go:282] 0 containers: []
	W1213 10:25:32.902223    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:32.906561    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:32.940346    5404 logs.go:282] 0 containers: []
	W1213 10:25:32.940346    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:32.944320    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:32.975469    5404 logs.go:282] 0 containers: []
	W1213 10:25:32.975499    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:32.975499    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:32.975499    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:33.041207    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:33.041207    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:33.083590    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:33.083590    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:33.180935    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:33.168485   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:33.169714   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:33.171003   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:33.172674   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:33.174027   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:25:33.168485   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:33.169714   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:33.171003   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:33.172674   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:33.174027   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:25:33.180935    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:33.180935    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:25:33.210089    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:33.210152    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:25:35.768098    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:35.860177    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:35.889000    5404 logs.go:282] 0 containers: []
	W1213 10:25:35.889000    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:35.896003    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:35.925040    5404 logs.go:282] 0 containers: []
	W1213 10:25:35.925040    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:35.930032    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:35.958609    5404 logs.go:282] 0 containers: []
	W1213 10:25:35.958609    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:35.962133    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:35.992299    5404 logs.go:282] 0 containers: []
	W1213 10:25:35.992362    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:35.996377    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:36.026302    5404 logs.go:282] 0 containers: []
	W1213 10:25:36.026302    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:36.029926    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:36.059528    5404 logs.go:282] 0 containers: []
	W1213 10:25:36.059528    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:36.063110    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:36.093396    5404 logs.go:282] 0 containers: []
	W1213 10:25:36.093396    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:36.097255    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:36.126154    5404 logs.go:282] 0 containers: []
	W1213 10:25:36.126154    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:36.126154    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:36.126154    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:36.163586    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:36.164570    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:36.247461    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:36.234933   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:36.237658   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:36.238961   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:36.241820   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:36.243104   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:25:36.234933   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:36.237658   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:36.238961   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:36.241820   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:36.243104   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:25:36.247461    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:36.247461    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:25:36.274462    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:36.274462    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:25:36.322858    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:36.322858    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:38.892110    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:38.918061    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:38.952306    5404 logs.go:282] 0 containers: []
	W1213 10:25:38.952306    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:38.956376    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:38.984034    5404 logs.go:282] 0 containers: []
	W1213 10:25:38.984034    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:38.988175    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:39.018071    5404 logs.go:282] 0 containers: []
	W1213 10:25:39.018071    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:39.022189    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:39.056215    5404 logs.go:282] 0 containers: []
	W1213 10:25:39.056285    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:39.060000    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:39.089755    5404 logs.go:282] 0 containers: []
	W1213 10:25:39.089755    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:39.093043    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:39.127383    5404 logs.go:282] 0 containers: []
	W1213 10:25:39.127457    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:39.130982    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:39.159645    5404 logs.go:282] 0 containers: []
	W1213 10:25:39.159645    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:39.163350    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:39.192096    5404 logs.go:282] 0 containers: []
	W1213 10:25:39.192179    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:39.192179    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:39.192179    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:25:39.223185    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:39.223313    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:25:39.274723    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:39.274723    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:39.340519    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:39.340519    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:39.383564    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:39.383564    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:39.468710    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:39.457661   10903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:39.458930   10903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:39.461566   10903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:39.463030   10903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:39.464232   10903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:25:39.457661   10903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:39.458930   10903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:39.461566   10903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:39.463030   10903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:39.464232   10903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:25:39.796867   12024 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-416400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (13.6860644s)
	I1213 10:25:39.796867   12024 kic.go:203] duration metric: took 13.6890687s to extract preloaded images to volume ...
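Extracting the v1.34.2 preload into the bridge-416400 volume took about 13.7s; the tarball carries the Kubernetes container images so the node does not have to pull them at start. A sketch for checking what landed in the volume (alpine here is an arbitrary helper image, not something this test run uses):

	docker run --rm -v bridge-416400:/var alpine ls /var/lib/docker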
	I1213 10:25:39.801837   12024 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:25:40.095481   12024 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:25:40.026200957 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:25:40.101006   12024 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 10:25:40.369275   12024 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-416400 --name bridge-416400 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-416400 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-416400 --network bridge-416400 --ip 192.168.94.2 --volume bridge-416400:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 10:25:41.158687   12024 cli_runner.go:164] Run: docker container inspect bridge-416400 --format={{.State.Running}}
	I1213 10:25:41.221666   12024 cli_runner.go:164] Run: docker container inspect bridge-416400 --format={{.State.Status}}
	I1213 10:25:41.275699   12024 cli_runner.go:164] Run: docker exec bridge-416400 stat /var/lib/dpkg/alternatives/iptables
	I1213 10:25:41.394676   12024 oci.go:144] the created container "bridge-416400" has a running status.
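With the container up, every later SSH step goes through a host port that Docker assigned for the --publish=127.0.0.1::22 mapping in the run command above. A quick way to recover that mapping, which should match the port the SSH client uses below (54691 in this run):

	docker port bridge-416400 22
	# e.g. 127.0.0.1:54691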
	I1213 10:25:41.394676   12024 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-416400\id_rsa...
	I1213 10:25:41.626195   12024 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-416400\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 10:25:41.712887   12024 cli_runner.go:164] Run: docker container inspect bridge-416400 --format={{.State.Status}}
	I1213 10:25:41.772878   12024 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 10:25:41.772878   12024 kic_runner.go:114] Args: [docker exec --privileged bridge-416400 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 10:25:41.898450   12024 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-416400\id_rsa...
	W1213 10:25:41.363684    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	I1213 10:25:41.972444    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:42.004328    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:42.049826    5404 logs.go:282] 0 containers: []
	W1213 10:25:42.049826    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:42.055261    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:42.086954    5404 logs.go:282] 0 containers: []
	W1213 10:25:42.086954    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:42.092499    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:42.135541    5404 logs.go:282] 0 containers: []
	W1213 10:25:42.135541    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:42.137788    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:42.175806    5404 logs.go:282] 0 containers: []
	W1213 10:25:42.175922    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:42.181557    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:42.217489    5404 logs.go:282] 0 containers: []
	W1213 10:25:42.217489    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:42.222304    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:42.252826    5404 logs.go:282] 0 containers: []
	W1213 10:25:42.252826    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:42.257410    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:42.294098    5404 logs.go:282] 0 containers: []
	W1213 10:25:42.294098    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:42.298088    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:42.329092    5404 logs.go:282] 0 containers: []
	W1213 10:25:42.329092    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:42.329092    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:42.329092    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:25:42.390575    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:42.390575    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:42.474343    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:42.474343    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:42.524422    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:42.524422    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:42.618600    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:42.608954   11067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:42.610024   11067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:42.610906   11067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:42.613189   11067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:42.614616   11067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:25:42.608954   11067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:42.610024   11067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:42.610906   11067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:42.613189   11067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:42.614616   11067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:25:42.618600    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:42.618600    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:25:45.152316    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:45.174306    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:45.207310    5404 logs.go:282] 0 containers: []
	W1213 10:25:45.207310    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:45.210309    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:45.238013    5404 logs.go:282] 0 containers: []
	W1213 10:25:45.238013    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:45.241522    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:45.277528    5404 logs.go:282] 0 containers: []
	W1213 10:25:45.277528    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:45.281057    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:45.310750    5404 logs.go:282] 0 containers: []
	W1213 10:25:45.310750    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:45.314483    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:45.352031    5404 logs.go:282] 0 containers: []
	W1213 10:25:45.352031    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:45.355035    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:45.386619    5404 logs.go:282] 0 containers: []
	W1213 10:25:45.386619    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:45.390619    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:45.424279    5404 logs.go:282] 0 containers: []
	W1213 10:25:45.424279    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:45.428270    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:45.458271    5404 logs.go:282] 0 containers: []
	W1213 10:25:45.458271    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:45.458271    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:45.458271    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:45.522619    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:45.522619    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:45.562726    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:45.562726    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:45.647172    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:45.636542   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:45.637634   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:45.638735   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:45.640327   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:45.642687   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:25:45.636542   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:45.637634   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:45.638735   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:45.640327   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:45.642687   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:25:45.647172    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:45.647172    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:25:45.685304    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:45.685304    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:25:44.203158   12024 cli_runner.go:164] Run: docker container inspect bridge-416400 --format={{.State.Status}}
	I1213 10:25:44.264024   12024 machine.go:94] provisionDockerMachine start ...
	I1213 10:25:44.269012   12024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-416400
	I1213 10:25:44.340685   12024 main.go:143] libmachine: Using SSH client type: native
	I1213 10:25:44.353685   12024 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 54691 <nil> <nil>}
	I1213 10:25:44.353685   12024 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:25:44.543230   12024 main.go:143] libmachine: SSH cmd err, output: <nil>: bridge-416400
	
	I1213 10:25:44.543230   12024 ubuntu.go:182] provisioning hostname "bridge-416400"
	I1213 10:25:44.545785   12024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-416400
	I1213 10:25:44.607001   12024 main.go:143] libmachine: Using SSH client type: native
	I1213 10:25:44.607481   12024 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 54691 <nil> <nil>}
	I1213 10:25:44.607526   12024 main.go:143] libmachine: About to run SSH command:
	sudo hostname bridge-416400 && echo "bridge-416400" | sudo tee /etc/hostname
	I1213 10:25:44.800735   12024 main.go:143] libmachine: SSH cmd err, output: <nil>: bridge-416400
	
	I1213 10:25:44.804627   12024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-416400
	I1213 10:25:44.861431   12024 main.go:143] libmachine: Using SSH client type: native
	I1213 10:25:44.862053   12024 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 54691 <nil> <nil>}
	I1213 10:25:44.862053   12024 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-416400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-416400/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-416400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:25:45.053297   12024 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:25:45.053354   12024 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1213 10:25:45.053354   12024 ubuntu.go:190] setting up certificates
	I1213 10:25:45.053354   12024 provision.go:84] configureAuth start
	I1213 10:25:45.057711   12024 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-416400
	I1213 10:25:45.112303   12024 provision.go:143] copyHostCerts
	I1213 10:25:45.113303   12024 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1213 10:25:45.113303   12024 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1213 10:25:45.113303   12024 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1213 10:25:45.114306   12024 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1213 10:25:45.114306   12024 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1213 10:25:45.114306   12024 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1213 10:25:45.115309   12024 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1213 10:25:45.115309   12024 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1213 10:25:45.115309   12024 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1213 10:25:45.116311   12024 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.bridge-416400 san=[127.0.0.1 192.168.94.2 bridge-416400 localhost minikube]
	I1213 10:25:45.144316   12024 provision.go:177] copyRemoteCerts
	I1213 10:25:45.148317   12024 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:25:45.152316   12024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-416400
	I1213 10:25:45.203308   12024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54691 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-416400\id_rsa Username:docker}
	I1213 10:25:45.328791   12024 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:25:45.359025   12024 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:25:45.388619   12024 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
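The three files copied above (ca.pem, server.pem, server-key.pem) are the server-side TLS material referenced by the --tlsverify/--tlscacert/--tlscert/--tlskey flags in the docker.service unit written just below; the matching client pair (cert.pem, key.pem) stays on the host. A TLS-verified client call against the provisioned daemon would look roughly like this (sketch; <host-port> stands for the published 2376/tcp port, which this excerpt does not show):

	docker -H tcp://127.0.0.1:<host-port> --tlsverify --tlscacert C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --tlscert C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --tlskey C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem version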
	I1213 10:25:45.422271   12024 provision.go:87] duration metric: took 368.9115ms to configureAuth
	I1213 10:25:45.422271   12024 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:25:45.423278   12024 config.go:182] Loaded profile config "bridge-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 10:25:45.427270   12024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-416400
	I1213 10:25:45.479279   12024 main.go:143] libmachine: Using SSH client type: native
	I1213 10:25:45.479279   12024 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 54691 <nil> <nil>}
	I1213 10:25:45.479279   12024 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 10:25:45.653793   12024 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1213 10:25:45.653876   12024 ubuntu.go:71] root file system type: overlay
	I1213 10:25:45.654079   12024 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 10:25:45.657414   12024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-416400
	I1213 10:25:45.718797   12024 main.go:143] libmachine: Using SSH client type: native
	I1213 10:25:45.718797   12024 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 54691 <nil> <nil>}
	I1213 10:25:45.718797   12024 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 10:25:45.918268   12024 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 10:25:45.922777   12024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-416400
	I1213 10:25:45.980760   12024 main.go:143] libmachine: Using SSH client type: native
	I1213 10:25:45.981052   12024 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 54691 <nil> <nil>}
	I1213 10:25:45.981052   12024 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 10:25:47.525948   12024 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-13 10:25:45.907397000 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1213 10:25:47.526036   12024 machine.go:97] duration metric: took 3.2619643s to provisionDockerMachine
	I1213 10:25:47.526036   12024 client.go:176] duration metric: took 23.4298457s to LocalClient.Create
	I1213 10:25:47.526123   12024 start.go:167] duration metric: took 23.4309357s to libmachine.API.Create "bridge-416400"
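The paired ExecStart= lines in the unit above, an empty one followed by the full dockerd command, are standard systemd semantics for replacing rather than appending to an inherited command. The same effect could be achieved with a minimal drop-in instead of rewriting the whole unit (sketch, hypothetical path):

	# /etc/systemd/system/docker.service.d/10-execstart.conf
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock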
	I1213 10:25:47.526191   12024 start.go:293] postStartSetup for "bridge-416400" (driver="docker")
	I1213 10:25:47.526232   12024 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:25:47.530805   12024 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:25:47.533852   12024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-416400
	I1213 10:25:47.591146   12024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54691 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-416400\id_rsa Username:docker}
	I1213 10:25:47.733779   12024 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:25:47.741777   12024 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:25:47.741777   12024 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:25:47.741777   12024 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1213 10:25:47.741777   12024 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1213 10:25:47.742784   12024 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> 29682.pem in /etc/ssl/certs
	I1213 10:25:47.749457   12024 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 10:25:47.766050   12024 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /etc/ssl/certs/29682.pem (1708 bytes)
	I1213 10:25:47.800577   12024 start.go:296] duration metric: took 274.3516ms for postStartSetup
	I1213 10:25:47.808823   12024 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-416400
	I1213 10:25:47.865088   12024 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-416400\config.json ...
	I1213 10:25:47.870085   12024 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:25:47.873085   12024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-416400
	I1213 10:25:47.924090   12024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54691 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-416400\id_rsa Username:docker}
	I1213 10:25:48.052280   12024 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:25:48.061981   12024 start.go:128] duration metric: took 23.9697862s to createHost
	I1213 10:25:48.061981   12024 start.go:83] releasing machines lock for "bridge-416400", held for 23.9697862s
	I1213 10:25:48.065713   12024 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-416400
	I1213 10:25:48.122260   12024 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1213 10:25:48.126246   12024 ssh_runner.go:195] Run: cat /version.json
	I1213 10:25:48.126246   12024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-416400
	I1213 10:25:48.129256   12024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-416400
	I1213 10:25:48.184267   12024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54691 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-416400\id_rsa Username:docker}
	I1213 10:25:48.184267   12024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54691 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-416400\id_rsa Username:docker}
	I1213 10:25:48.593983    5076 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 10:25:48.594969    5076 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:25:48.594969    5076 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:25:48.594969    5076 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:25:48.594969    5076 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:25:48.594969    5076 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:25:48.597973    5076 out.go:252]   - Generating certificates and keys ...
	I1213 10:25:48.597973    5076 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:25:48.597973    5076 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:25:48.597973    5076 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 10:25:48.597973    5076 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 10:25:48.598978    5076 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 10:25:48.598978    5076 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 10:25:48.598978    5076 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 10:25:48.598978    5076 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [flannel-416400 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 10:25:48.598978    5076 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 10:25:48.599975    5076 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [flannel-416400 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 10:25:48.599975    5076 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 10:25:48.599975    5076 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 10:25:48.599975    5076 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 10:25:48.599975    5076 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:25:48.599975    5076 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:25:48.599975    5076 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:25:48.599975    5076 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:25:48.599975    5076 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:25:48.600983    5076 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:25:48.600983    5076 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:25:48.600983    5076 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:25:48.603975    5076 out.go:252]   - Booting up control plane ...
	I1213 10:25:48.603975    5076 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:25:48.603975    5076 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:25:48.604972    5076 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:25:48.604972    5076 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:25:48.604972    5076 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:25:48.604972    5076 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:25:48.604972    5076 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:25:48.604972    5076 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:25:48.605965    5076 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:25:48.605965    5076 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:25:48.605965    5076 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002148201s
	I1213 10:25:48.605965    5076 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 10:25:48.605965    5076 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1213 10:25:48.606984    5076 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 10:25:48.606984    5076 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 10:25:48.606984    5076 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 11.535692764s
	I1213 10:25:48.606984    5076 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 12.200563469s
	I1213 10:25:48.606984    5076 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 14.502219055s
	I1213 10:25:48.606984    5076 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 10:25:48.607970    5076 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 10:25:48.607970    5076 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 10:25:48.607970    5076 kubeadm.go:319] [mark-control-plane] Marking the node flannel-416400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 10:25:48.607970    5076 kubeadm.go:319] [bootstrap-token] Using token: elv4wv.0jpx83uda5vkz0fa
	I1213 10:25:48.613975    5076 out.go:252]   - Configuring RBAC rules ...
	I1213 10:25:48.613975    5076 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 10:25:48.613975    5076 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 10:25:48.614972    5076 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 10:25:48.614972    5076 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 10:25:48.614972    5076 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 10:25:48.614972    5076 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 10:25:48.615976    5076 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 10:25:48.615976    5076 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 10:25:48.615976    5076 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 10:25:48.615976    5076 kubeadm.go:319] 
	I1213 10:25:48.615976    5076 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 10:25:48.615976    5076 kubeadm.go:319] 
	I1213 10:25:48.615976    5076 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 10:25:48.615976    5076 kubeadm.go:319] 
	I1213 10:25:48.615976    5076 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 10:25:48.615976    5076 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 10:25:48.615976    5076 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 10:25:48.615976    5076 kubeadm.go:319] 
	I1213 10:25:48.615976    5076 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 10:25:48.616974    5076 kubeadm.go:319] 
	I1213 10:25:48.616974    5076 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 10:25:48.616974    5076 kubeadm.go:319] 
	I1213 10:25:48.616974    5076 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 10:25:48.616974    5076 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 10:25:48.616974    5076 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 10:25:48.616974    5076 kubeadm.go:319] 
	I1213 10:25:48.616974    5076 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 10:25:48.617977    5076 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 10:25:48.617977    5076 kubeadm.go:319] 
	I1213 10:25:48.617977    5076 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token elv4wv.0jpx83uda5vkz0fa \
	I1213 10:25:48.617977    5076 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4e186cc62273bb1ac6e3884beccb3b1254d51eaaca530d60f3ff3ceb07e5bb75 \
	I1213 10:25:48.617977    5076 kubeadm.go:319] 	--control-plane 
	I1213 10:25:48.617977    5076 kubeadm.go:319] 
	I1213 10:25:48.617977    5076 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 10:25:48.617977    5076 kubeadm.go:319] 
	I1213 10:25:48.618983    5076 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token elv4wv.0jpx83uda5vkz0fa \
	I1213 10:25:48.618983    5076 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4e186cc62273bb1ac6e3884beccb3b1254d51eaaca530d60f3ff3ceb07e5bb75 
	I1213 10:25:48.618983    5076 cni.go:84] Creating CNI manager for "flannel"
	I1213 10:25:48.621969    5076 out.go:179] * Configuring Flannel (Container Networking Interface) ...
	I1213 10:25:48.245250    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:48.264253    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:48.303617    5404 logs.go:282] 0 containers: []
	W1213 10:25:48.303617    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:48.307333    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:48.340820    5404 logs.go:282] 0 containers: []
	W1213 10:25:48.340820    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:48.344802    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:48.381814    5404 logs.go:282] 0 containers: []
	W1213 10:25:48.381814    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:48.385808    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:48.425330    5404 logs.go:282] 0 containers: []
	W1213 10:25:48.425330    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:48.429331    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:48.462632    5404 logs.go:282] 0 containers: []
	W1213 10:25:48.462632    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:48.467247    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:48.505972    5404 logs.go:282] 0 containers: []
	W1213 10:25:48.505972    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:48.510971    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:48.538965    5404 logs.go:282] 0 containers: []
	W1213 10:25:48.538965    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:48.542968    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:48.571976    5404 logs.go:282] 0 containers: []
	W1213 10:25:48.571976    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:48.571976    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:48.571976    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:48.639975    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:48.639975    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:48.675969    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:48.675969    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:48.764445    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:48.753486   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:48.754879   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:48.756951   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:48.758329   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:48.759587   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:25:48.753486   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:48.754879   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:48.756951   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:48.758329   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:48.759587   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:25:48.764445    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:48.764445    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:25:48.795028    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:48.796033    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
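At this point every control-plane container query for PID 5404's cluster has returned empty and the localhost:8443 probes are refused, meaning the kubelet never brought up the static pods from /etc/kubernetes/manifests; the kubelet journal gathered above is where the root cause would show. A manual triage would run along these lines (illustrative; <profile> stands for the affected profile name, which this excerpt does not identify):

	minikube ssh -p <profile> -- sudo systemctl status kubelet
	minikube ssh -p <profile> -- sudo journalctl -u kubelet -n 50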
	W1213 10:25:48.309326   12024 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
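This exit-127 failure, rather than any real network problem, is what surfaces as the "! Failing to connect to https://registry.k8s.io/" warning a few lines below: the reachability probe invoked the Windows binary name curl.exe inside the Linux guest, where only curl exists, so the check cannot succeed regardless of connectivity. A manual re-check from the host would use the Linux name (illustrative):

	minikube ssh -p bridge-416400 -- curl -sS -m 2 https://registry.k8s.io/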
	I1213 10:25:48.314336   12024 ssh_runner.go:195] Run: systemctl --version
	I1213 10:25:48.329884   12024 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:25:48.339801   12024 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:25:48.343801   12024 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:25:48.396822   12024 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 10:25:48.396822   12024 start.go:496] detecting cgroup driver to use...
	I1213 10:25:48.396822   12024 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:25:48.396822   12024 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1213 10:25:48.415354   12024 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1213 10:25:48.415354   12024 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1213 10:25:48.431330   12024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:25:48.453136   12024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:25:48.474142   12024 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:25:48.478988   12024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:25:48.501966   12024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:25:48.522972   12024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:25:48.540971   12024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:25:48.558986   12024 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:25:48.576985   12024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:25:48.597973   12024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:25:48.617977   12024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 10:25:48.637981   12024 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:25:48.654968   12024 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:25:48.670968   12024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:25:48.828855   12024 ssh_runner.go:195] Run: sudo systemctl restart containerd
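The sed edits above force SystemdCgroup = false in /etc/containerd/config.toml so the runtime agrees with the cgroupfs driver detected on the host; a cgroup-driver mismatch between kubelet and runtime is a classic cause of pods dying at startup. Agreement can be spot-checked inside the guest (illustrative):

	sudo grep SystemdCgroup /etc/containerd/config.toml   # expect: SystemdCgroup = false
	docker info --format '{{.CgroupDriver}}'              # expect: cgroupfs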
	I1213 10:25:48.945233   12024 start.go:496] detecting cgroup driver to use...
	I1213 10:25:48.945233   12024 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:25:48.949232   12024 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 10:25:48.976906   12024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:25:49.005156   12024 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:25:49.075037   12024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:25:49.099728   12024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:25:49.124488   12024 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:25:49.156654   12024 ssh_runner.go:195] Run: which cri-dockerd
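With Docker selected as the runtime, crictl has just been repointed from containerd's socket to cri-dockerd's via the one-line /etc/crictl.yaml written above. The same endpoint can also be passed explicitly (illustrative):

	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a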
	I1213 10:25:49.169640   12024 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 10:25:49.185627   12024 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1213 10:25:49.213161   12024 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 10:25:49.368475   12024 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 10:25:49.517574   12024 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 10:25:49.517811   12024 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 10:25:49.544774   12024 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1213 10:25:49.569809   12024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:25:49.708983   12024 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 10:25:50.577006   12024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:25:50.601995   12024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 10:25:50.629993   12024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 10:25:50.655561   12024 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 10:25:50.800630   12024 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 10:25:50.945390   12024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:25:51.082633   12024 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 10:25:51.109551   12024 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1213 10:25:51.132001   12024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:25:51.280153   12024 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 10:25:51.405845   12024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 10:25:51.425832   12024 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 10:25:51.429831   12024 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 10:25:51.437848   12024 start.go:564] Will wait 60s for crictl version
	I1213 10:25:51.441829   12024 ssh_runner.go:195] Run: which crictl
	I1213 10:25:51.452828   12024 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:25:51.500684   12024 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1213 10:25:51.504832   12024 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 10:25:51.550763   12024 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 10:25:48.628968    5076 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1213 10:25:48.635988    5076 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1213 10:25:48.635988    5076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4415 bytes)
	I1213 10:25:48.684967    5076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
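Flannel is applied as a plain manifest through the cluster's own kubectl binary; whether the DaemonSet actually rolled out can be confirmed afterwards (illustrative):

	sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get pods -A | grep -i flannel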
	I1213 10:25:49.151640    5076 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 10:25:49.157643    5076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-416400 minikube.k8s.io/updated_at=2025_12_13T10_25_49_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453 minikube.k8s.io/name=flannel-416400 minikube.k8s.io/primary=true
	I1213 10:25:49.157643    5076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:25:49.168648    5076 ops.go:34] apiserver oom_adj: -16
	I1213 10:25:49.321215    5076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:25:49.822060    5076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:25:50.321778    5076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:25:50.820553    5076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:25:51.322520    5076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:25:51.595756   12024 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.2 ...
	I1213 10:25:51.598766   12024 cli_runner.go:164] Run: docker exec -t bridge-416400 dig +short host.docker.internal
	I1213 10:25:51.718758   12024 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1213 10:25:51.722757   12024 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1213 10:25:51.729779   12024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
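The dig against host.docker.internal recovers the Windows host's address as seen from inside the container (192.168.65.254 on this Docker Desktop setup), which is then pinned in the guest's /etc/hosts as host.minikube.internal. The mapping can be verified from the host (illustrative):

	minikube ssh -p bridge-416400 -- getent hosts host.minikube.internal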
	I1213 10:25:51.747760   12024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" bridge-416400
	I1213 10:25:51.801767   12024 kubeadm.go:884] updating cluster {Name:bridge-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:25:51.801767   12024 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:25:51.804771   12024 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 10:25:51.838770   12024 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 10:25:51.838770   12024 docker.go:621] Images already preloaded, skipping extraction
	I1213 10:25:51.841780   12024 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 10:25:51.873761   12024 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 10:25:51.873761   12024 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:25:51.873761   12024 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 docker true true} ...
	I1213 10:25:51.873761   12024 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-416400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:bridge-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
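The kubelet override pins node identity with --hostname-override=bridge-416400 and --node-ip=192.168.94.2, so the registered node matches the container's name and its address on the bridge network. Once the apiserver is up, that identity is visible directly (illustrative):

	sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node bridge-416400 -o wide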
	I1213 10:25:51.876772   12024 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1213 10:25:51.963821   12024 cni.go:84] Creating CNI manager for "bridge"
	I1213 10:25:51.963821   12024 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:25:51.963821   12024 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-416400 NodeName:bridge-416400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:25:51.964444   12024 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "bridge-416400"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
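The generated file stacks four kubeadm API documents: InitConfiguration (node registration against the cri-dockerd socket), ClusterConfiguration (control-plane endpoint, cert SANs, admission plugins), KubeletConfiguration (cgroupfs driver, disk eviction disabled), and KubeProxyConfiguration (cluster CIDR, conntrack overrides). Recent kubeadm releases can lint such a file before init (sketch; assumes the kubeadm binary sits alongside kubectl under /var/lib/minikube/binaries):

	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new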
	
	I1213 10:25:51.968992   12024 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 10:25:51.980852   12024 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:25:51.984854   12024 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:25:51.999201   12024 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1213 10:25:52.021824   12024 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 10:25:52.042734   12024 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1213 10:25:52.069334   12024 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:25:52.077207   12024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:25:52.098796   12024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:25:52.247088   12024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:25:52.273625   12024 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-416400 for IP: 192.168.94.2
	I1213 10:25:52.273625   12024 certs.go:195] generating shared ca certs ...
	I1213 10:25:52.273716   12024 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:52.274244   12024 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1213 10:25:52.274487   12024 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1213 10:25:52.274606   12024 certs.go:257] generating profile certs ...
	I1213 10:25:52.274933   12024 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-416400\client.key
	I1213 10:25:52.274933   12024 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-416400\client.crt with IP's: []
	I1213 10:25:52.328051   12024 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-416400\client.crt ...
	I1213 10:25:52.328051   12024 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-416400\client.crt: {Name:mk6dd6b1a3469172601dbaf4e6948e4d03380de3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:52.329571   12024 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-416400\client.key ...
	I1213 10:25:52.329620   12024 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-416400\client.key: {Name:mk13339a6a4795945cc048c4f4e36bde570f2e60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:52.330571   12024 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-416400\apiserver.key.a45a7711
	I1213 10:25:52.330732   12024 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-416400\apiserver.crt.a45a7711 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1213 10:25:52.448409   12024 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-416400\apiserver.crt.a45a7711 ...
	I1213 10:25:52.448409   12024 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-416400\apiserver.crt.a45a7711: {Name:mkf9958569bf0e7506f035c0a08d614f3946c092 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:52.448859   12024 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-416400\apiserver.key.a45a7711 ...
	I1213 10:25:52.448859   12024 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-416400\apiserver.key.a45a7711: {Name:mk6f84afe5b5dd607545675b5b87ba15688c56c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:52.449839   12024 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-416400\apiserver.crt.a45a7711 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-416400\apiserver.crt
	I1213 10:25:52.464755   12024 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-416400\apiserver.key.a45a7711 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-416400\apiserver.key
	I1213 10:25:52.465617   12024 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-416400\proxy-client.key
	I1213 10:25:52.465699   12024 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-416400\proxy-client.crt with IP's: []
	I1213 10:25:52.638562   12024 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-416400\proxy-client.crt ...
	I1213 10:25:52.638562   12024 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-416400\proxy-client.crt: {Name:mk64bbe6688cff8ec6176f429099f3b81b02a946 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:52.640543   12024 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-416400\proxy-client.key ...
	I1213 10:25:52.640603   12024 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-416400\proxy-client.key: {Name:mk5d0c37ed252a3e3fcca8b390c1a86b336facbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:52.655323   12024 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem (1338 bytes)
	W1213 10:25:52.655644   12024 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968_empty.pem, impossibly tiny 0 bytes
	I1213 10:25:52.655688   12024 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1213 10:25:52.655914   12024 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1213 10:25:52.656091   12024 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1213 10:25:52.656272   12024 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1213 10:25:52.656272   12024 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem (1708 bytes)
	I1213 10:25:52.657036   12024 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:25:52.697603   12024 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:25:52.726523   12024 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:25:52.756088   12024 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 10:25:52.783994   12024 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-416400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 10:25:52.824203   12024 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-416400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 10:25:52.856848   12024 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-416400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:25:52.893729   12024 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-416400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 10:25:52.927846   12024 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /usr/share/ca-certificates/29682.pem (1708 bytes)
	I1213 10:25:52.956472   12024 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:25:52.984171   12024 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem --> /usr/share/ca-certificates/2968.pem (1338 bytes)
	I1213 10:25:53.011703   12024 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:25:53.038039   12024 ssh_runner.go:195] Run: openssl version
	I1213 10:25:53.052090   12024 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2968.pem
	I1213 10:25:53.072133   12024 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2968.pem /etc/ssl/certs/2968.pem
	I1213 10:25:53.092158   12024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2968.pem
	I1213 10:25:53.102050   12024 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:48 /usr/share/ca-certificates/2968.pem
	I1213 10:25:53.106876   12024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2968.pem
	I1213 10:25:53.158826   12024 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:25:53.176409   12024 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2968.pem /etc/ssl/certs/51391683.0
	I1213 10:25:53.201954   12024 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29682.pem
	I1213 10:25:53.222573   12024 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29682.pem /etc/ssl/certs/29682.pem
	I1213 10:25:53.240979   12024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29682.pem
	I1213 10:25:53.250901   12024 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:48 /usr/share/ca-certificates/29682.pem
	I1213 10:25:53.255722   12024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29682.pem
	I1213 10:25:51.820766    5076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:25:52.322532    5076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:25:52.822022    5076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:25:53.323523    5076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:25:53.821924    5076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:25:53.974388    5076 kubeadm.go:1114] duration metric: took 4.8226777s to wait for elevateKubeSystemPrivileges
	I1213 10:25:53.974388    5076 kubeadm.go:403] duration metric: took 29.4554217s to StartCluster
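[Editor's note] The five identical "kubectl get sa default" runs above are a poll: kubeadm creates the "default" ServiceAccount asynchronously, and minikube waits for it (elevateKubeSystemPrivileges) before continuing. Roughly this loop shape, as a sketch of the pattern rather than minikube's actual Go code:

	# Poll until the default ServiceAccount exists (pattern visible above, ~500ms apart).
	until sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done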
	I1213 10:25:53.974388    5076 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:53.974388    5076 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:25:53.976722    5076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:53.977563    5076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 10:25:53.977633    5076 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 10:25:53.977880    5076 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:25:53.977880    5076 addons.go:70] Setting storage-provisioner=true in profile "flannel-416400"
	I1213 10:25:53.977880    5076 addons.go:239] Setting addon storage-provisioner=true in "flannel-416400"
	I1213 10:25:53.977880    5076 addons.go:70] Setting default-storageclass=true in profile "flannel-416400"
	I1213 10:25:53.977880    5076 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "flannel-416400"
	I1213 10:25:53.977880    5076 host.go:66] Checking if "flannel-416400" exists ...
	I1213 10:25:53.977880    5076 config.go:182] Loaded profile config "flannel-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 10:25:53.982130    5076 out.go:179] * Verifying Kubernetes components...
	I1213 10:25:53.989515    5076 cli_runner.go:164] Run: docker container inspect flannel-416400 --format={{.State.Status}}
	I1213 10:25:53.994420    5076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:25:53.998425    5076 cli_runner.go:164] Run: docker container inspect flannel-416400 --format={{.State.Status}}
	I1213 10:25:54.059335    5076 addons.go:239] Setting addon default-storageclass=true in "flannel-416400"
	I1213 10:25:54.059335    5076 host.go:66] Checking if "flannel-416400" exists ...
	I1213 10:25:54.060336    5076 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1213 10:25:51.396848    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	I1213 10:25:54.063334    5076 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:25:54.063334    5076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:25:54.066334    5076 cli_runner.go:164] Run: docker container inspect flannel-416400 --format={{.State.Status}}
	I1213 10:25:54.067334    5076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-416400
	I1213 10:25:54.119335    5076 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:25:54.119335    5076 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:25:54.120337    5076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54612 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\flannel-416400\id_rsa Username:docker}
	I1213 10:25:54.123338    5076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-416400
	I1213 10:25:54.175358    5076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54612 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\flannel-416400\id_rsa Username:docker}
	I1213 10:25:54.196128    5076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 10:25:54.491824    5076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:25:54.584424    5076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:25:54.586369    5076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:25:55.182020    5076 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
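[Editor's note] The replace pipeline at 10:25:54.196128 rewrites the coredns ConfigMap so pods can resolve host.minikube.internal (it also inserts a log directive before errors). Derived from that sed script, not from captured output, the injected Corefile stanza should read:

	hosts {
	   192.168.65.254 host.minikube.internal
	   fallthrough
	}

The rescale to 1 replica reported just below is minikube's usual trim of the coredns Deployment on a single-node cluster.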
	I1213 10:25:55.697837    5076 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-416400" context rescaled to 1 replicas
	I1213 10:25:55.736090    5076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.2442488s)
	I1213 10:25:55.736090    5076 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.1516495s)
	I1213 10:25:55.736090    5076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.1497047s)
	I1213 10:25:55.741101    5076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" flannel-416400
	I1213 10:25:55.755777    5076 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1213 10:25:51.360604    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:51.386834    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:51.420844    5404 logs.go:282] 0 containers: []
	W1213 10:25:51.420844    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:51.423830    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:51.454840    5404 logs.go:282] 0 containers: []
	W1213 10:25:51.454840    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:51.457831    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:51.487200    5404 logs.go:282] 0 containers: []
	W1213 10:25:51.487200    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:51.491050    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:51.523378    5404 logs.go:282] 0 containers: []
	W1213 10:25:51.523378    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:51.527449    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:51.560764    5404 logs.go:282] 0 containers: []
	W1213 10:25:51.560764    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:51.563766    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:51.592770    5404 logs.go:282] 0 containers: []
	W1213 10:25:51.592770    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:51.595756    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:51.624758    5404 logs.go:282] 0 containers: []
	W1213 10:25:51.624758    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:51.627755    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:51.660771    5404 logs.go:282] 0 containers: []
	W1213 10:25:51.660771    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:51.660771    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:51.660771    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:51.723762    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:51.723762    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:51.758760    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:51.758760    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:51.846775    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:51.839620   11543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:51.840595   11543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:51.841762   11543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:51.842893   11543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:51.843941   11543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:25:51.839620   11543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:51.840595   11543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:51.841762   11543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:51.842893   11543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:51.843941   11543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:25:51.846775    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:51.846775    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:25:51.875761    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:51.875761    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:25:54.430333    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:54.454434    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:54.493825    5404 logs.go:282] 0 containers: []
	W1213 10:25:54.493825    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:54.497512    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:54.533494    5404 logs.go:282] 0 containers: []
	W1213 10:25:54.533494    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:54.538683    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:54.564773    5404 logs.go:282] 0 containers: []
	W1213 10:25:54.564773    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:54.568433    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:54.605317    5404 logs.go:282] 0 containers: []
	W1213 10:25:54.605317    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:54.609855    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:54.640802    5404 logs.go:282] 0 containers: []
	W1213 10:25:54.640802    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:54.645532    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:54.677470    5404 logs.go:282] 0 containers: []
	W1213 10:25:54.677470    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:54.683512    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:54.716223    5404 logs.go:282] 0 containers: []
	W1213 10:25:54.716223    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:54.720204    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:54.752249    5404 logs.go:282] 0 containers: []
	W1213 10:25:54.752295    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:54.752346    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:54.752346    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:54.824990    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:54.824990    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:54.861528    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:54.861528    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:54.947890    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:54.939731   11707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:54.941493   11707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:54.942789   11707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:54.944407   11707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:54.945507   11707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:25:54.939731   11707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:54.941493   11707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:54.942789   11707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:54.944407   11707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:54.945507   11707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:25:54.947890    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:54.947890    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:25:54.978543    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:54.978543    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:25:55.759332    5076 addons.go:530] duration metric: took 1.7814267s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1213 10:25:55.799075    5076 node_ready.go:35] waiting up to 15m0s for node "flannel-416400" to be "Ready" ...
	I1213 10:25:53.304665   12024 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:25:53.324537   12024 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/29682.pem /etc/ssl/certs/3ec20f2e.0
	I1213 10:25:53.347516   12024 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:25:53.368545   12024 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:25:53.385065   12024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:25:53.392706   12024 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:25:53.397643   12024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:25:53.452790   12024 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:25:53.471158   12024 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
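[Editor's note] The hash/symlink steps above follow OpenSSL's subject-hash convention: each CA under /usr/share/ca-certificates gets an /etc/ssl/certs/<subject-hash>.0 symlink so verification can locate it. Condensed from the commands in this log (a sketch; the path and the b5213941 hash are from this run):

	# Per-CA pattern seen above: compute the subject hash, then symlink the cert by hash.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # -> b5213941.0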
	I1213 10:25:53.489748   12024 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:25:53.497111   12024 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 10:25:53.497111   12024 kubeadm.go:401] StartCluster: {Name:bridge-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:25:53.501290   12024 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 10:25:53.541226   12024 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:25:53.559963   12024 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:25:53.572551   12024 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:25:53.579392   12024 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:25:53.592454   12024 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:25:53.592512   12024 kubeadm.go:158] found existing configuration files:
	
	I1213 10:25:53.597153   12024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 10:25:53.613409   12024 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:25:53.618830   12024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:25:53.637502   12024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 10:25:53.653324   12024 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:25:53.657326   12024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:25:53.677353   12024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 10:25:53.692327   12024 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:25:53.696322   12024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:25:53.717323   12024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 10:25:53.731332   12024 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:25:53.735331   12024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:25:53.752329   12024 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:25:53.872505   12024 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1213 10:25:53.876884   12024 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1213 10:25:53.996425   12024 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
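[Editor's note] The three kubeadm [WARNING] lines are expected under the docker driver: the init invocation at 10:25:53.752329 passes --ignore-preflight-errors for exactly these checks (Swap, SystemVerification, Service-Kubelet), so none of them aborts the init. Swap in particular is tolerated because the KubeletConfiguration above sets failSwapOn: false; on a host you manage, the Swap warning could instead be cleared with the standard command:

	# Optional: disable swap for the current boot (well-known admin command, not from this log).
	sudo swapoff -a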
	W1213 10:26:00.181301    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1213 10:26:00.181301    8468 node_ready.go:38] duration metric: took 6m0.0011586s for node "no-preload-803600" to be "Ready" ...
	I1213 10:26:00.184322    8468 out.go:203] 
	W1213 10:26:00.187317    8468 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 10:26:00.187317    8468 out.go:285] * 
	W1213 10:26:00.189310    8468 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:26:00.192302    8468 out.go:203] 
	I1213 10:25:57.539868    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:57.563007    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:57.597162    5404 logs.go:282] 0 containers: []
	W1213 10:25:57.597219    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:57.601488    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:57.633273    5404 logs.go:282] 0 containers: []
	W1213 10:25:57.633273    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:57.637275    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:57.666277    5404 logs.go:282] 0 containers: []
	W1213 10:25:57.666277    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:57.671269    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:57.706080    5404 logs.go:282] 0 containers: []
	W1213 10:25:57.706080    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:57.709089    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:57.743977    5404 logs.go:282] 0 containers: []
	W1213 10:25:57.744028    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:57.747842    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:57.776566    5404 logs.go:282] 0 containers: []
	W1213 10:25:57.776566    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:57.780492    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:57.815351    5404 logs.go:282] 0 containers: []
	W1213 10:25:57.815386    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:57.819106    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:57.854910    5404 logs.go:282] 0 containers: []
	W1213 10:25:57.854910    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:57.854910    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:57.854910    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:57.917747    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:57.917747    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:57.956537    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:57.956537    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:58.040821    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:58.031929   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:58.033046   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:58.035980   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:58.037154   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:58.038464   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:25:58.031929   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:58.033046   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:58.035980   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:58.037154   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:58.038464   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:25:58.040821    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:58.040821    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:25:58.070378    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:58.070378    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:26:00.628331    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:26:00.655322    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:26:00.699337    5404 logs.go:282] 0 containers: []
	W1213 10:26:00.699337    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:26:00.706348    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:26:00.750326    5404 logs.go:282] 0 containers: []
	W1213 10:26:00.750326    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:26:00.755322    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:26:00.800324    5404 logs.go:282] 0 containers: []
	W1213 10:26:00.800324    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:26:00.805326    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:26:00.864335    5404 logs.go:282] 0 containers: []
	W1213 10:26:00.864335    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:26:00.870325    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:26:00.930337    5404 logs.go:282] 0 containers: []
	W1213 10:26:00.930337    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:26:00.935326    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:26:00.969332    5404 logs.go:282] 0 containers: []
	W1213 10:26:00.969332    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:26:00.973332    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:26:01.016343    5404 logs.go:282] 0 containers: []
	W1213 10:26:01.016343    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:26:01.020342    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:26:01.057324    5404 logs.go:282] 0 containers: []
	W1213 10:26:01.057324    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:26:01.057324    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:26:01.057324    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:26:01.112326    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:26:01.112326    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 10:25:57.805345    5076 node_ready.go:57] node "flannel-416400" has "Ready":"False" status (will retry)
	W1213 10:25:59.807380    5076 node_ready.go:57] node "flannel-416400" has "Ready":"False" status (will retry)
	
	
	==> Docker <==
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.519842040Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.519963651Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.519978553Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.519984253Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.519989854Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.520014956Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.520057560Z" level=info msg="Initializing buildkit"
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.639585638Z" level=info msg="Completed buildkit initialization"
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.645206773Z" level=info msg="Daemon has completed initialization"
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.645396691Z" level=info msg="API listen on [::]:2376"
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.645511202Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 10:19:56 no-preload-803600 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.645529304Z" level=info msg="API listen on /run/docker.sock"
	Dec 13 10:19:57 no-preload-803600 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 10:19:57 no-preload-803600 cri-dockerd[1223]: time="2025-12-13T10:19:57Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 13 10:19:57 no-preload-803600 cri-dockerd[1223]: time="2025-12-13T10:19:57Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 13 10:19:57 no-preload-803600 cri-dockerd[1223]: time="2025-12-13T10:19:57Z" level=info msg="Start docker client with request timeout 0s"
	Dec 13 10:19:57 no-preload-803600 cri-dockerd[1223]: time="2025-12-13T10:19:57Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 13 10:19:57 no-preload-803600 cri-dockerd[1223]: time="2025-12-13T10:19:57Z" level=info msg="Loaded network plugin cni"
	Dec 13 10:19:57 no-preload-803600 cri-dockerd[1223]: time="2025-12-13T10:19:57Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 13 10:19:57 no-preload-803600 cri-dockerd[1223]: time="2025-12-13T10:19:57Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 13 10:19:57 no-preload-803600 cri-dockerd[1223]: time="2025-12-13T10:19:57Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 13 10:19:57 no-preload-803600 cri-dockerd[1223]: time="2025-12-13T10:19:57Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 13 10:19:57 no-preload-803600 cri-dockerd[1223]: time="2025-12-13T10:19:57Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 13 10:19:57 no-preload-803600 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:26:02.619823    8379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:02.620918    8379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:02.622952    8379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:02.624786    8379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:02.625573    8379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +6.357488] tmpfs: Unknown parameter 'noswap'
	[  +0.731817] CPU: 6 PID: 472045 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f365e8c5b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7f365e8c5af6.
	[  +0.000001] RSP: 002b:00007ffc3cf2f500 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.876043] CPU: 11 PID: 472225 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f5fc639cb20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f5fc639caf6.
	[  +0.000001] RSP: 002b:00007ffdf2adf090 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[ +10.476539] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 10:26:02 up  2:02,  0 user,  load average: 5.46, 4.39, 3.78
	Linux no-preload-803600 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:25:59 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:26:00 no-preload-803600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 13 10:26:00 no-preload-803600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:26:00 no-preload-803600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:26:00 no-preload-803600 kubelet[8218]: E1213 10:26:00.470052    8218 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:26:00 no-preload-803600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:26:00 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:26:01 no-preload-803600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 13 10:26:01 no-preload-803600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:26:01 no-preload-803600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:26:01 no-preload-803600 kubelet[8229]: E1213 10:26:01.171245    8229 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:26:01 no-preload-803600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:26:01 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:26:01 no-preload-803600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 13 10:26:01 no-preload-803600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:26:01 no-preload-803600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:26:01 no-preload-803600 kubelet[8258]: E1213 10:26:01.927939    8258 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:26:01 no-preload-803600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:26:01 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:26:02 no-preload-803600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 13 10:26:02 no-preload-803600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:26:02 no-preload-803600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:26:02 no-preload-803600 kubelet[8389]: E1213 10:26:02.680312    8389 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:26:02 no-preload-803600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:26:02 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
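The kubelet crash loop above is one validation failure repeated: kubelet v1.35.0-beta.0 refuses to start because the WSL2 host is still on cgroup v1 (the restart counter climbs from 481 to 484 in under four seconds). A minimal Go sketch of how such a host-side check can be done, using only the standard library and the CGROUP2_SUPER_MAGIC constant from linux/magic.h; this is an illustration of the condition, not kubelet's actual code:

	package main

	import (
		"fmt"
		"syscall"
	)

	// CGROUP2_SUPER_MAGIC from linux/magic.h.
	const cgroup2SuperMagic = 0x63677270

	func main() {
		var st syscall.Statfs_t
		if err := syscall.Statfs("/sys/fs/cgroup", &st); err != nil {
			fmt.Println("statfs:", err)
			return
		}
		if st.Type == cgroup2SuperMagic {
			fmt.Println("cgroup v2 (unified hierarchy): the validation above would pass")
		} else {
			// The WSL2 host in this run lands here, matching the kubelet errors above.
			fmt.Println("cgroup v1 host: kubelet v1.35+ fails configuration validation")
		}
	}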
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-803600 -n no-preload-803600
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-803600 -n no-preload-803600: exit status 2 (596.274ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-803600" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (378.53s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (123.87s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-307000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1213 10:20:04.592404    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-987400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-307000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (2m0.8532224s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
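The "Using image fake.domain/registry.k8s.io/echoserver:1.4" line in the stdout above shows how the two flags compose: the --registries override is prefixed onto the --images value, which is why the resulting deployment can never pull (fake.domain does not resolve, by design of the test). A trivial sketch of that composition, with both values copied from the invocation above:

	package main

	import "fmt"

	func main() {
		image := "registry.k8s.io/echoserver:1.4" // from --images=MetricsServer=...
		registry := "fake.domain"                 // from --registries=MetricsServer=...
		// Reproduces the composed reference in the "Using image ..." line above.
		fmt.Println("Using image " + registry + "/" + image)
	}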
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_addons_e23971240287a88151a2b5edd52daaba3879ba4a_8.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-307000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
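All four kubectl apply failures above stop at the same point: client-side validation needs the OpenAPI document from https://localhost:8443 and the connection is refused, meaning nothing was listening on the apiserver port inside the node when the addon was applied. A minimal Go probe for that condition, standard library only, with the port taken from the errors above:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same endpoint kubectl's validation tried to reach in the errors above.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err) // "connection refused" in this run
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}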
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-307000
helpers_test.go:244: (dbg) docker inspect newest-cni-307000:

-- stdout --
	[
	    {
	        "Id": "cc243490f4045f6275620fead3cf743bdcc06793f30944d53d3d0e22c416211e",
	        "Created": "2025-12-13T10:11:37.912113644Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 355235,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:11:38.183095334Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/cc243490f4045f6275620fead3cf743bdcc06793f30944d53d3d0e22c416211e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cc243490f4045f6275620fead3cf743bdcc06793f30944d53d3d0e22c416211e/hostname",
	        "HostsPath": "/var/lib/docker/containers/cc243490f4045f6275620fead3cf743bdcc06793f30944d53d3d0e22c416211e/hosts",
	        "LogPath": "/var/lib/docker/containers/cc243490f4045f6275620fead3cf743bdcc06793f30944d53d3d0e22c416211e/cc243490f4045f6275620fead3cf743bdcc06793f30944d53d3d0e22c416211e-json.log",
	        "Name": "/newest-cni-307000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-307000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-307000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1fd6cedff83bee99df393eab952a55cc2565a988396fbf552640cb0ef5f70bba-init/diff:/var/lib/docker/overlay2/429aa299c6fcdb1695d08ec7c893c57c033afffcd3ec41fc904bf3236db5abde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1fd6cedff83bee99df393eab952a55cc2565a988396fbf552640cb0ef5f70bba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1fd6cedff83bee99df393eab952a55cc2565a988396fbf552640cb0ef5f70bba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1fd6cedff83bee99df393eab952a55cc2565a988396fbf552640cb0ef5f70bba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-307000",
	                "Source": "/var/lib/docker/volumes/newest-cni-307000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-307000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-307000",
	                "name.minikube.sigs.k8s.io": "newest-cni-307000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "10d63c7cb118c215d26ed42a89aeec2ea240984b20e4abf3bd5096fefb5edd44",
	            "SandboxKey": "/var/run/docker/netns/10d63c7cb118",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52920"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52921"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52922"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52923"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52924"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-307000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "091d798055d24cd11a8819044665f960a2f1124bb052fb661c5793e42aeec481",
	                    "EndpointID": "c474b750c640cb16671e0143b43f227805c0724bfd0be3d318c79e885a42cae3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-307000",
	                        "cc243490f404"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
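Per the NetworkSettings above, the container itself is healthy and 8443/tcp is published on 127.0.0.1:52924, so the failure is not at the Docker layer. That host port can be read back with the same Go template shape minikube's cli_runner uses later in this log (see the 22/tcp variant under "Last Start"); a sketch that shells out to docker, with the profile name taken from this report:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template shape as minikube's own "docker container inspect -f" calls.
		tmpl := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
			"newest-cni-307000").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("apiserver published on 127.0.0.1:" + strings.TrimSpace(string(out)))
	}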
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-307000 -n newest-cni-307000
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-307000 -n newest-cni-307000: exit status 6 (602.6612ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1213 10:22:01.174524    7972 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-307000" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
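This status error is distinct from the apiserver problem: the profile is missing from the kubeconfig file entirely, which is what the "stale minikube-vm" warning and the `minikube update-context` hint refer to. A sketch of the same lookup using client-go's kubeconfig loader (clientcmd.LoadFromFile is the standard k8s.io/client-go API; the path and profile name are copied from the lines above):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path copied from the status error above (Windows host).
		path := `C:\Users\jenkins.minikube4\minikube-integration\kubeconfig`
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		if _, ok := cfg.Contexts["newest-cni-307000"]; !ok {
			// Matches status.go:458 above; "minikube update-context" would re-add it.
			fmt.Println(`"newest-cni-307000" does not appear in the kubeconfig`)
		}
	}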
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-307000 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-307000 logs -n 25: (1.1851566s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-416400 sudo cat /var/lib/kubelet/config.yaml                                                                                  │ kindnet-416400            │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ addons  │ enable metrics-server -p newest-cni-307000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain  │ newest-cni-307000         │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │                     │
	│ ssh     │ -p kindnet-416400 sudo systemctl status docker --all --full --no-pager                                                                   │ kindnet-416400            │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo systemctl cat docker --no-pager                                                                                   │ kindnet-416400            │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo cat /etc/docker/daemon.json                                                                                       │ kindnet-416400            │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo docker system info                                                                                                │ kindnet-416400            │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo systemctl status cri-docker --all --full --no-pager                                                               │ kindnet-416400            │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo systemctl cat cri-docker --no-pager                                                                               │ kindnet-416400            │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                          │ kindnet-416400            │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                    │ kindnet-416400            │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo cri-dockerd --version                                                                                             │ kindnet-416400            │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo systemctl status containerd --all --full --no-pager                                                               │ kindnet-416400            │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo systemctl cat containerd --no-pager                                                                               │ kindnet-416400            │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo cat /lib/systemd/system/containerd.service                                                                        │ kindnet-416400            │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo cat /etc/containerd/config.toml                                                                                   │ kindnet-416400            │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo containerd config dump                                                                                            │ kindnet-416400            │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo systemctl status crio --all --full --no-pager                                                                     │ kindnet-416400            │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │                     │
	│ ssh     │ -p kindnet-416400 sudo systemctl cat crio --no-pager                                                                                     │ kindnet-416400            │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                           │ kindnet-416400            │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ ssh     │ -p kindnet-416400 sudo crio config                                                                                                       │ kindnet-416400            │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ delete  │ -p kindnet-416400                                                                                                                        │ kindnet-416400            │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ start   │ -p calico-416400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker                             │ calico-416400             │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-481200                                                                                                             │ kubernetes-upgrade-481200 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:20 UTC │
	│ start   │ -p custom-flannel-416400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker │ custom-flannel-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:20 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh     │ -p custom-flannel-416400 pgrep -a kubelet                                                                                                │ custom-flannel-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:20:30
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:20:30.619911    8676 out.go:360] Setting OutFile to fd 516 ...
	I1213 10:20:30.669593    8676 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:20:30.669593    8676 out.go:374] Setting ErrFile to fd 1872...
	I1213 10:20:30.669593    8676 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:20:30.686239    8676 out.go:368] Setting JSON to false
	I1213 10:20:30.689588    8676 start.go:133] hostinfo: {"hostname":"minikube4","uptime":7037,"bootTime":1765614192,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 10:20:30.689588    8676 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 10:20:30.769407    8676 out.go:179] * [custom-flannel-416400] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 10:20:30.825693    8676 notify.go:221] Checking for updates...
	I1213 10:20:30.828556    8676 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:20:30.832289    8676 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:20:30.835306    8676 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 10:20:30.837597    8676 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:20:30.864108    8676 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:20:30.867430    8676 config.go:182] Loaded profile config "calico-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 10:20:30.868021    8676 config.go:182] Loaded profile config "newest-cni-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:20:30.868021    8676 config.go:182] Loaded profile config "no-preload-803600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:20:30.868021    8676 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:20:31.027327    8676 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 10:20:31.031328    8676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:20:31.310628    8676 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:79 OomKillDisable:true NGoroutines:90 SystemTime:2025-12-13 10:20:31.293308927 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:20:31.315628    8676 out.go:179] * Using the docker driver based on user configuration
	I1213 10:20:31.318629    8676 start.go:309] selected driver: docker
	I1213 10:20:31.318629    8676 start.go:927] validating driver "docker" against <nil>
	I1213 10:20:31.318629    8676 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:20:31.360637    8676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:20:31.614289    8676 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:82 OomKillDisable:true NGoroutines:92 SystemTime:2025-12-13 10:20:31.591544922 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:20:31.614899    8676 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:20:31.615669    8676 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:20:31.617338    8676 out.go:179] * Using Docker Desktop driver with root privileges
	I1213 10:20:31.619872    8676 cni.go:84] Creating CNI manager for "testdata\\kube-flannel.yaml"
	I1213 10:20:31.620587    8676 start_flags.go:336] Found "testdata\\kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1213 10:20:31.620800    8676 start.go:353] cluster config:
	{Name:custom-flannel-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:20:31.623128    8676 out.go:179] * Starting "custom-flannel-416400" primary control-plane node in "custom-flannel-416400" cluster
	I1213 10:20:31.627011    8676 cache.go:134] Beginning downloading kic base image for docker with docker
	I1213 10:20:31.629227    8676 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:20:31.632024    8676 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:20:31.632024    8676 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:20:31.632245    8676 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1213 10:20:31.632245    8676 cache.go:65] Caching tarball of preloaded images
	I1213 10:20:31.632647    8676 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 10:20:31.632816    8676 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1213 10:20:31.633019    8676 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-416400\config.json ...
	I1213 10:20:31.633226    8676 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-416400\config.json: {Name:mkf32fc66c3e6df29a020e9e53322ca2ec57fa8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:20:31.713662    8676 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:20:31.713662    8676 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:20:31.713662    8676 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:20:31.713662    8676 start.go:360] acquireMachinesLock for custom-flannel-416400: {Name:mk3bf1cd91ae27862ca1523fe09309e88ee2abd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:20:31.713662    8676 start.go:364] duration metric: took 0s to acquireMachinesLock for "custom-flannel-416400"
	I1213 10:20:31.713662    8676 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 10:20:31.713662    8676 start.go:125] createHost starting for "" (driver="docker")
	I1213 10:20:30.906326   12636 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-416400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (12.5349703s)
	I1213 10:20:30.906414   12636 kic.go:203] duration metric: took 12.5397094s to extract preloaded images to volume ...
	I1213 10:20:30.911288   12636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:20:31.171183   12636 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:true NGoroutines:90 SystemTime:2025-12-13 10:20:31.153456613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:20:31.175182   12636 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 10:20:31.439176   12636 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-416400 --name calico-416400 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-416400 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-416400 --network calico-416400 --ip 192.168.94.2 --volume calico-416400:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 10:20:32.168041   12636 cli_runner.go:164] Run: docker container inspect calico-416400 --format={{.State.Running}}
	I1213 10:20:32.232042   12636 cli_runner.go:164] Run: docker container inspect calico-416400 --format={{.State.Status}}
	I1213 10:20:32.286048   12636 cli_runner.go:164] Run: docker exec calico-416400 stat /var/lib/dpkg/alternatives/iptables
	I1213 10:20:32.401904   12636 oci.go:144] the created container "calico-416400" has a running status.
	I1213 10:20:32.401904   12636 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-416400\id_rsa...
	I1213 10:20:32.481904   12636 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-416400\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 10:20:32.552917   12636 cli_runner.go:164] Run: docker container inspect calico-416400 --format={{.State.Status}}
	I1213 10:20:32.616847   12636 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 10:20:32.616847   12636 kic_runner.go:114] Args: [docker exec --privileged calico-416400 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 10:20:32.733085   12636 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-416400\id_rsa...
	I1213 10:20:35.004488   12636 cli_runner.go:164] Run: docker container inspect calico-416400 --format={{.State.Status}}
	I1213 10:20:35.054169   12636 machine.go:94] provisionDockerMachine start ...
	I1213 10:20:35.058082   12636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-416400
	I1213 10:20:35.114986   12636 main.go:143] libmachine: Using SSH client type: native
	I1213 10:20:35.131003   12636 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53642 <nil> <nil>}
	I1213 10:20:35.131003   12636 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:20:35.316201   12636 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-416400
	
	I1213 10:20:35.316201   12636 ubuntu.go:182] provisioning hostname "calico-416400"
	I1213 10:20:35.320565   12636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-416400
	W1213 10:20:30.463245    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:30.463245    8468 retry.go:31] will retry after 10.83926478s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:31.079937    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:20:31.245972    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:31.245972    8468 retry.go:31] will retry after 20.02853134s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:35.025488    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:20:35.127988    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:35.128994    8468 retry.go:31] will retry after 15.628436435s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
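
The burst of "connection refused" failures above is kubectl's client-side validation trying to fetch the OpenAPI schema from an apiserver that is not yet (or no longer) listening on localhost:8443; per the addons.go:477 / retry.go:31 lines, minikube treats each failed apply as retryable and sleeps a randomized interval before the next attempt. A minimal sketch of that shape (retryApply, the attempt count, and the backoff constants are illustrative assumptions, not minikube's actual API):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryApply keeps calling apply until it succeeds or attempts run out,
// with a jittered wait between tries -- matching the irregular
// "will retry after 10.8s / 20.0s / 15.6s" intervals in the log.
func retryApply(apply func() error, attempts int) error {
	// minikube waits on the order of 10-20s; milliseconds here keep the demo fast.
	base := 100 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		wait := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	calls := 0
	err := retryApply(func() error {
		calls++
		if calls < 3 {
			return errors.New("dial tcp [::1]:8443: connect: connection refused")
		}
		return nil // apiserver finally answering
	}, 5)
	fmt.Println("result:", err, "after", calls, "attempts")
}
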
	I1213 10:20:31.717651    8676 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 10:20:31.717651    8676 start.go:159] libmachine.API.Create for "custom-flannel-416400" (driver="docker")
	I1213 10:20:31.717651    8676 client.go:173] LocalClient.Create starting
	I1213 10:20:31.717651    8676 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1213 10:20:31.718652    8676 main.go:143] libmachine: Decoding PEM data...
	I1213 10:20:31.718652    8676 main.go:143] libmachine: Parsing certificate...
	I1213 10:20:31.718652    8676 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1213 10:20:31.718652    8676 main.go:143] libmachine: Decoding PEM data...
	I1213 10:20:31.718652    8676 main.go:143] libmachine: Parsing certificate...
	I1213 10:20:31.724651    8676 cli_runner.go:164] Run: docker network inspect custom-flannel-416400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 10:20:31.782662    8676 cli_runner.go:211] docker network inspect custom-flannel-416400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 10:20:31.788653    8676 network_create.go:284] running [docker network inspect custom-flannel-416400] to gather additional debugging logs...
	I1213 10:20:31.788653    8676 cli_runner.go:164] Run: docker network inspect custom-flannel-416400
	W1213 10:20:31.851712    8676 cli_runner.go:211] docker network inspect custom-flannel-416400 returned with exit code 1
	I1213 10:20:31.851712    8676 network_create.go:287] error running [docker network inspect custom-flannel-416400]: docker network inspect custom-flannel-416400: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-416400 not found
	I1213 10:20:31.851712    8676 network_create.go:289] output of [docker network inspect custom-flannel-416400]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-416400 not found
	
	** /stderr **
	I1213 10:20:31.855322    8676 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:20:31.928706    8676 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:20:31.943878    8676 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:20:31.959076    8676 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:20:31.974682    8676 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:20:31.990524    8676 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:20:32.005721    8676 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:20:32.019761    8676 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00189aa50}
	I1213 10:20:32.019761    8676 network_create.go:124] attempt to create docker network custom-flannel-416400 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1213 10:20:32.023021    8676 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-416400 custom-flannel-416400
	W1213 10:20:32.095554    8676 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-416400 custom-flannel-416400 returned with exit code 1
	W1213 10:20:32.095554    8676 network_create.go:149] failed to create docker network custom-flannel-416400 192.168.103.0/24 with gateway 192.168.103.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-416400 custom-flannel-416400: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1213 10:20:32.095554    8676 network_create.go:116] failed to create docker network custom-flannel-416400 192.168.103.0/24, will retry: subnet is taken
	I1213 10:20:32.127680    8676 network.go:209] skipping subnet 192.168.103.0/24 that is reserved: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:20:32.145036    8676 network.go:206] using free private subnet 192.168.112.0/24: &{IP:192.168.112.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.112.0/24 Gateway:192.168.112.1 ClientMin:192.168.112.2 ClientMax:192.168.112.254 Broadcast:192.168.112.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0017c1e30}
	I1213 10:20:32.145036    8676 network_create.go:124] attempt to create docker network custom-flannel-416400 192.168.112.0/24 with gateway 192.168.112.1 and MTU of 1500 ...
	I1213 10:20:32.150567    8676 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.112.0/24 --gateway=192.168.112.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-416400 custom-flannel-416400
	I1213 10:20:32.297059    8676 network_create.go:108] docker network custom-flannel-416400 192.168.112.0/24 created
	I1213 10:20:32.297059    8676 kic.go:121] calculated static IP "192.168.112.2" for the "custom-flannel-416400" container
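
Above, each candidate /24 already claimed by another profile is skipped, and a "Pool overlaps" error from `docker network create` marks the subnet as taken and advances the scan (192.168.103.0/24 fails, 192.168.112.0/24 succeeds). A rough sketch of that scan, with the function name and reserved set as assumptions rather than minikube's network.go internals:

package main

import "fmt"

// freeSubnet steps through candidate 192.168.x.0/24 blocks -- the log shows
// the third octet advancing by 9 per attempt (49, 58, 67, ..., 103, 112) --
// and returns the first one not already reserved.
func freeSubnet(reserved map[string]bool) (string, error) {
	for octet := 49; octet <= 246; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if reserved[cidr] {
			fmt.Printf("skipping subnet %s that is reserved\n", cidr)
			continue
		}
		return cidr, nil
	}
	return "", fmt.Errorf("no free private /24 found")
}

func main() {
	reserved := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true, "192.168.94.0/24": true,
		"192.168.103.0/24": true, // marked taken after the "Pool overlaps" failure
	}
	cidr, err := freeSubnet(reserved)
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", cidr) // 192.168.112.0/24
}
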
	I1213 10:20:32.309052    8676 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 10:20:32.385906    8676 cli_runner.go:164] Run: docker volume create custom-flannel-416400 --label name.minikube.sigs.k8s.io=custom-flannel-416400 --label created_by.minikube.sigs.k8s.io=true
	I1213 10:20:32.442916    8676 oci.go:103] Successfully created a docker volume custom-flannel-416400
	I1213 10:20:32.445906    8676 cli_runner.go:164] Run: docker run --rm --name custom-flannel-416400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-416400 --entrypoint /usr/bin/test -v custom-flannel-416400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 10:20:33.858023    8676 cli_runner.go:217] Completed: docker run --rm --name custom-flannel-416400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-416400 --entrypoint /usr/bin/test -v custom-flannel-416400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.4120967s)
	I1213 10:20:33.858023    8676 oci.go:107] Successfully prepared a docker volume custom-flannel-416400
	I1213 10:20:33.858023    8676 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:20:33.858023    8676 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 10:20:33.863018    8676 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-416400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 10:20:35.374452   12636 main.go:143] libmachine: Using SSH client type: native
	I1213 10:20:35.375125   12636 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53642 <nil> <nil>}
	I1213 10:20:35.375125   12636 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-416400 && echo "calico-416400" | sudo tee /etc/hostname
	I1213 10:20:35.565831   12636 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-416400
	
	I1213 10:20:35.569969   12636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-416400
	I1213 10:20:35.626387   12636 main.go:143] libmachine: Using SSH client type: native
	I1213 10:20:35.626387   12636 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53642 <nil> <nil>}
	I1213 10:20:35.626387   12636 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-416400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-416400/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-416400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:20:35.808692   12636 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:20:35.808692   12636 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1213 10:20:35.808692   12636 ubuntu.go:190] setting up certificates
	I1213 10:20:35.809228   12636 provision.go:84] configureAuth start
	I1213 10:20:35.812543   12636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-416400
	I1213 10:20:35.867214   12636 provision.go:143] copyHostCerts
	I1213 10:20:35.867214   12636 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1213 10:20:35.867214   12636 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1213 10:20:35.867214   12636 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1213 10:20:35.868806   12636 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1213 10:20:35.868806   12636 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1213 10:20:35.869397   12636 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1213 10:20:35.870246   12636 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1213 10:20:35.870246   12636 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1213 10:20:35.870864   12636 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1213 10:20:35.871815   12636 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.calico-416400 san=[127.0.0.1 192.168.94.2 calico-416400 localhost minikube]
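
The server certificate generated here is signed by the profile's CA and carries the SANs listed in the log line (127.0.0.1, 192.168.94.2, calico-416400, localhost, minikube). A self-contained sketch of issuing such a cert with Go's crypto/x509, using a throwaway CA in place of minikube's ca.pem/ca-key.pem (names, key sizes, and lifetimes here are illustrative, not minikube's provision code):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for ca.pem/ca-key.pem.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server cert with the org and SANs from the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.calico-416400"}},
		DNSNames:     []string{"calico-416400", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	// Emit just the certificate (server-key handling omitted for brevity).
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
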
	I1213 10:20:35.961111   12636 provision.go:177] copyRemoteCerts
	I1213 10:20:35.964813   12636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:20:35.967973   12636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-416400
	I1213 10:20:36.018268   12636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53642 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-416400\id_rsa Username:docker}
	I1213 10:20:36.145843   12636 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:20:36.178429   12636 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 10:20:36.213401   12636 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 10:20:36.244916   12636 provision.go:87] duration metric: took 435.6343ms to configureAuth
	I1213 10:20:36.244916   12636 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:20:36.245575   12636 config.go:182] Loaded profile config "calico-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 10:20:36.248844   12636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-416400
	I1213 10:20:36.308512   12636 main.go:143] libmachine: Using SSH client type: native
	I1213 10:20:36.309439   12636 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53642 <nil> <nil>}
	I1213 10:20:36.309439   12636 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 10:20:36.499996   12636 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1213 10:20:36.499996   12636 ubuntu.go:71] root file system type: overlay
	I1213 10:20:36.499996   12636 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 10:20:36.502985   12636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-416400
	I1213 10:20:36.557308   12636 main.go:143] libmachine: Using SSH client type: native
	I1213 10:20:36.557984   12636 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53642 <nil> <nil>}
	I1213 10:20:36.558129   12636 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 10:20:36.763744   12636 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 10:20:36.770107   12636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-416400
	I1213 10:20:36.825041   12636 main.go:143] libmachine: Using SSH client type: native
	I1213 10:20:36.825585   12636 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53642 <nil> <nil>}
	I1213 10:20:36.825619   12636 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
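
The command above is an idempotent-update idiom: only when `diff -u` reports a difference does it install docker.service.new, daemon-reload, and restart dockerd, so re-provisioning an already-configured machine leaves the running daemon untouched. A local sketch of the same compare-then-replace flow (paths and systemctl invocations copied from the log; running this anywhere but a disposable guest is purely illustrative):

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	const cur = "/lib/systemd/system/docker.service"
	const next = cur + ".new"

	old, _ := os.ReadFile(cur) // a missing unit reads as empty
	proposed, err := os.ReadFile(next)
	if err != nil {
		log.Fatal(err)
	}
	if bytes.Equal(old, proposed) {
		return // already up to date; leave the running daemon alone
	}
	if err := os.Rename(next, cur); err != nil {
		log.Fatal(err)
	}
	for _, args := range [][]string{
		{"systemctl", "-f", "daemon-reload"},
		{"systemctl", "-f", "enable", "docker"},
		{"systemctl", "-f", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			log.Fatalf("%v failed: %v\n%s", args, err, out)
		}
	}
}
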
	W1213 10:20:40.316441    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	I1213 10:20:45.137023   12636 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-13 10:20:36.757057096 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1213 10:20:45.137023   12636 machine.go:97] duration metric: took 10.0827098s to provisionDockerMachine
	I1213 10:20:45.137023   12636 client.go:176] duration metric: took 28.9941269s to LocalClient.Create
	I1213 10:20:45.137023   12636 start.go:167] duration metric: took 28.9947317s to libmachine.API.Create "calico-416400"
	I1213 10:20:45.137023   12636 start.go:293] postStartSetup for "calico-416400" (driver="docker")
	I1213 10:20:45.137023   12636 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:20:45.141031   12636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:20:45.144014   12636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-416400
	I1213 10:20:45.195018   12636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53642 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-416400\id_rsa Username:docker}
	I1213 10:20:41.307852    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:20:41.394384    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:41.394470    8468 retry.go:31] will retry after 13.877355955s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:44.987119    8676 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-416400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (11.123942s)
	I1213 10:20:44.987119    8676 kic.go:203] duration metric: took 11.128937s to extract preloaded images to volume ...
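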
	I1213 10:20:44.992438    8676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:20:45.228014    8676 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:20:45.208986824 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:20:45.231022    8676 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 10:20:45.458202    8676 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-416400 --name custom-flannel-416400 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-416400 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-416400 --network custom-flannel-416400 --ip 192.168.112.2 --volume custom-flannel-416400:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 10:20:45.335200   12636 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:20:45.342214   12636 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:20:45.342214   12636 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:20:45.342214   12636 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1213 10:20:45.342214   12636 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1213 10:20:45.343202   12636 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> 29682.pem in /etc/ssl/certs
	I1213 10:20:45.347201   12636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 10:20:45.359206   12636 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /etc/ssl/certs/29682.pem (1708 bytes)
	I1213 10:20:45.387207   12636 start.go:296] duration metric: took 250.1812ms for postStartSetup
	I1213 10:20:45.392217   12636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-416400
	I1213 10:20:45.440212   12636 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-416400\config.json ...
	I1213 10:20:45.446198   12636 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:20:45.449205   12636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-416400
	I1213 10:20:45.497209   12636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53642 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-416400\id_rsa Username:docker}
	I1213 10:20:45.625602   12636 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:20:45.640658   12636 start.go:128] duration metric: took 29.5014499s to createHost
	I1213 10:20:45.640658   12636 start.go:83] releasing machines lock for "calico-416400", held for 29.5028912s
	I1213 10:20:45.646380   12636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-416400
	I1213 10:20:45.710322   12636 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1213 10:20:45.716323   12636 ssh_runner.go:195] Run: cat /version.json
	I1213 10:20:45.716323   12636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-416400
	I1213 10:20:45.721318   12636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-416400
	I1213 10:20:45.772328   12636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53642 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-416400\id_rsa Username:docker}
	I1213 10:20:45.773317   12636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53642 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-416400\id_rsa Username:docker}
	W1213 10:20:45.897798   12636 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1213 10:20:45.901741   12636 ssh_runner.go:195] Run: systemctl --version
	I1213 10:20:45.919214   12636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:20:45.927335   12636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:20:45.932055   12636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:20:45.982728   12636 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 10:20:45.982728   12636 start.go:496] detecting cgroup driver to use...
	I1213 10:20:45.982728   12636 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:20:45.982728   12636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1213 10:20:45.991385   12636 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1213 10:20:45.991385   12636 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
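
Note why this probe failed: it ran `curl.exe`, a Windows binary name, inside the Linux guest over SSH, so the shell returned exit 127 ("command not found") and minikube downgraded the failure to the proxy warning above. What the probe tries to measure is simply two-second HTTPS reachability of the registry, roughly equivalent to this in-process check:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Roughly what `curl -sS -m 2 https://registry.k8s.io/` checks:
	// can an HTTPS request complete within two seconds?
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("https://registry.k8s.io/")
	if err != nil {
		fmt.Println("! Failing to connect to https://registry.k8s.io/:", err)
		return
	}
	resp.Body.Close()
	fmt.Println("registry.k8s.io reachable:", resp.Status)
}
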
	I1213 10:20:46.120981   12636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:20:46.142926   12636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:20:46.159576   12636 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:20:46.163473   12636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:20:46.185844   12636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:20:46.203855   12636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:20:46.223849   12636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:20:46.245848   12636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:20:46.263860   12636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:20:46.283858   12636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:20:46.301852   12636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 10:20:46.319847   12636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:20:46.337854   12636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:20:46.355854   12636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:20:46.506519   12636 ssh_runner.go:195] Run: sudo systemctl restart containerd
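
The run of `sed -i` edits above rewrites /etc/containerd/config.toml in place: pinning the sandbox (pause) image, forcing `SystemdCgroup = false` to match the detected cgroupfs driver, migrating runtime v1 names to io.containerd.runc.v2, and pointing conf_dir at /etc/cni/net.d, after which containerd is restarted. Two of those edits expressed as an equivalent Go rewrite (minikube itself runs sed over SSH; the direct file access here is an assumption for the sketch):

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Force the cgroupfs driver, as in `SystemdCgroup = false`.
	data = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`).
		ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	// Pin the sandbox (pause) image.
	data = regexp.MustCompile(`(?m)^( *)sandbox_image = .*$`).
		ReplaceAll(data, []byte(`${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		log.Fatal(err)
	}
	// As in the log, a daemon-reload plus `systemctl restart containerd`
	// is still needed before the new settings take effect.
}
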
	I1213 10:20:46.657801   12636 start.go:496] detecting cgroup driver to use...
	I1213 10:20:46.657876   12636 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:20:46.663941   12636 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 10:20:46.693814   12636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:20:46.720819   12636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:20:46.797816   12636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:20:46.828831   12636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:20:46.848828   12636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:20:46.875814   12636 ssh_runner.go:195] Run: which cri-dockerd
	I1213 10:20:46.886813   12636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 10:20:46.903818   12636 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1213 10:20:46.929422   12636 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 10:20:47.090217   12636 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 10:20:47.210224   12636 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 10:20:47.210755   12636 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 10:20:47.239064   12636 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1213 10:20:47.263371   12636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:20:47.422992   12636 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 10:20:48.330664   12636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:20:48.354164   12636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 10:20:48.379704   12636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 10:20:48.405693   12636 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 10:20:48.560362   12636 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 10:20:48.728341   12636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:20:48.892621   12636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 10:20:48.919312   12636 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1213 10:20:48.942702   12636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:20:49.099672   12636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 10:20:49.208349   12636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 10:20:49.226339   12636 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 10:20:49.230335   12636 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 10:20:49.237335   12636 start.go:564] Will wait 60s for crictl version
	I1213 10:20:49.241327   12636 ssh_runner.go:195] Run: which crictl
	I1213 10:20:49.252335   12636 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:20:49.292940   12636 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1213 10:20:49.297744   12636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 10:20:49.344358   12636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 10:20:49.387307   12636 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.2 ...
	I1213 10:20:49.390294   12636 cli_runner.go:164] Run: docker exec -t calico-416400 dig +short host.docker.internal
	I1213 10:20:49.513297   12636 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1213 10:20:49.517302   12636 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1213 10:20:49.524302   12636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
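
The bash one-liner above upserts the host.minikube.internal mapping: it filters out any stale line, appends the gateway IP just dug from DNS, and copies the temp file back over /etc/hosts, so repeated starts never accumulate duplicates. The same logic as a standalone sketch (path and IP taken from the log; direct file access stands in for the SSH round-trip):

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const path = "/etc/hosts"
	const entry = "192.168.65.254\thost.minikube.internal"

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Same filter as `grep -v $'\thost.minikube.internal$'`.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}
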
	I1213 10:20:49.542298   12636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-416400
	I1213 10:20:49.593309   12636 kubeadm.go:884] updating cluster {Name:calico-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:20:49.593309   12636 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:20:49.596298   12636 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 10:20:49.630409   12636 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 10:20:49.630409   12636 docker.go:621] Images already preloaded, skipping extraction
	I1213 10:20:49.633407   12636 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 10:20:49.664337   12636 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 10:20:49.664337   12636 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:20:49.664337   12636 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 docker true true} ...
	I1213 10:20:49.664337   12636 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-416400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:calico-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1213 10:20:49.668261   12636 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1213 10:20:49.772340   12636 cni.go:84] Creating CNI manager for "calico"
	I1213 10:20:49.772340   12636 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:20:49.772340   12636 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-416400 NodeName:calico-416400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:20:49.772340   12636 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "calico-416400"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
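[note] The rendered config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) written to /var/tmp/minikube/kubeadm.yaml.new. A quick stdlib-only Go sketch for sanity-checking which kinds a rendered file contains (the local file name "kubeadm.yaml" is hypothetical):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    	"strings"
    )

    func main() {
    	// Hypothetical local copy of the rendered config shown above.
    	data, err := os.ReadFile("kubeadm.yaml")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
    	// kubeadm separates documents with a bare `---` line.
    	for i, doc := range strings.Split(string(data), "\n---\n") {
    		if m := kindRe.FindStringSubmatch(doc); m != nil {
    			fmt.Printf("document %d: %s\n", i+1, m[1])
    		}
    	}
    }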
	I1213 10:20:49.776340   12636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 10:20:49.788345   12636 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:20:49.792342   12636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:20:49.805346   12636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1213 10:20:49.828007   12636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 10:20:49.846005   12636 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1213 10:20:49.868001   12636 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:20:49.874998   12636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:20:49.894417   12636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:20:50.070160   12636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:20:50.092550   12636 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-416400 for IP: 192.168.94.2
	I1213 10:20:50.092603   12636 certs.go:195] generating shared ca certs ...
	I1213 10:20:50.092647   12636 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:20:50.093018   12636 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1213 10:20:50.093018   12636 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1213 10:20:50.093018   12636 certs.go:257] generating profile certs ...
	I1213 10:20:50.093742   12636 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-416400\client.key
	I1213 10:20:50.093742   12636 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-416400\client.crt with IP's: []
	I1213 10:20:50.299739   12636 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-416400\client.crt ...
	I1213 10:20:50.299739   12636 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-416400\client.crt: {Name:mkab618e0027ece3fb4721ec4853cfcf38438b1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:20:50.301138   12636 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-416400\client.key ...
	I1213 10:20:50.301138   12636 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-416400\client.key: {Name:mkf5ce12816bfa36773929e4a6647b9a995051ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:20:50.301996   12636 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-416400\apiserver.key.c0b96fc3
	I1213 10:20:50.301996   12636 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-416400\apiserver.crt.c0b96fc3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
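[note] Steps like crypto.go:68 above issue a CA-signed serving certificate whose IP SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.94.2) cover the cluster service VIP, loopback, and the node IP. A compact sketch of the same operation with Go's crypto/x509 (illustrative, not minikube's code; a throwaway CA stands in for the ca.key/ca.crt under .minikube, and error handling is elided):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA for the sketch; minikube reuses its existing shared CA.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Serving cert carrying the IP SANs from the log line above.
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2"),
    		},
    		NotBefore:   time.Now(),
    		NotAfter:    time.Now().Add(24 * time.Hour),
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }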
	W1213 10:20:50.349344    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	I1213 10:20:46.178845    8676 cli_runner.go:164] Run: docker container inspect custom-flannel-416400 --format={{.State.Running}}
	I1213 10:20:46.238849    8676 cli_runner.go:164] Run: docker container inspect custom-flannel-416400 --format={{.State.Status}}
	I1213 10:20:46.293844    8676 cli_runner.go:164] Run: docker exec custom-flannel-416400 stat /var/lib/dpkg/alternatives/iptables
	I1213 10:20:46.408241    8676 oci.go:144] the created container "custom-flannel-416400" has a running status.
	I1213 10:20:46.408241    8676 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-416400\id_rsa...
	I1213 10:20:46.704814    8676 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-416400\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 10:20:46.779813    8676 cli_runner.go:164] Run: docker container inspect custom-flannel-416400 --format={{.State.Status}}
	I1213 10:20:46.848828    8676 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 10:20:46.848828    8676 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-416400 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 10:20:46.963434    8676 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-416400\id_rsa...
	I1213 10:20:49.152334    8676 cli_runner.go:164] Run: docker container inspect custom-flannel-416400 --format={{.State.Status}}
	I1213 10:20:49.202336    8676 machine.go:94] provisionDockerMachine start ...
	I1213 10:20:49.206338    8676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-416400
	I1213 10:20:49.259328    8676 main.go:143] libmachine: Using SSH client type: native
	I1213 10:20:49.273208    8676 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53687 <nil> <nil>}
	I1213 10:20:49.273208    8676 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:20:49.444319    8676 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-416400
	
	I1213 10:20:49.444319    8676 ubuntu.go:182] provisioning hostname "custom-flannel-416400"
	I1213 10:20:49.448318    8676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-416400
	I1213 10:20:49.499297    8676 main.go:143] libmachine: Using SSH client type: native
	I1213 10:20:49.499297    8676 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53687 <nil> <nil>}
	I1213 10:20:49.499297    8676 main.go:143] libmachine: About to run SSH command:
	sudo hostname custom-flannel-416400 && echo "custom-flannel-416400" | sudo tee /etc/hostname
	I1213 10:20:49.688548    8676 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-416400
	
	I1213 10:20:49.694619    8676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-416400
	I1213 10:20:49.765350    8676 main.go:143] libmachine: Using SSH client type: native
	I1213 10:20:49.766351    8676 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53687 <nil> <nil>}
	I1213 10:20:49.766351    8676 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-416400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-416400/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-416400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:20:49.937326    8676 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:20:49.937326    8676 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1213 10:20:49.937879    8676 ubuntu.go:190] setting up certificates
	I1213 10:20:49.937941    8676 provision.go:84] configureAuth start
	I1213 10:20:49.941443    8676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-416400
	I1213 10:20:49.997299    8676 provision.go:143] copyHostCerts
	I1213 10:20:49.998302    8676 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1213 10:20:49.998302    8676 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1213 10:20:49.998302    8676 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1213 10:20:49.999649    8676 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1213 10:20:49.999698    8676 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1213 10:20:50.000037    8676 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1213 10:20:50.000818    8676 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1213 10:20:50.000818    8676 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1213 10:20:50.000818    8676 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1213 10:20:50.001421    8676 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.custom-flannel-416400 san=[127.0.0.1 192.168.112.2 custom-flannel-416400 localhost minikube]
	I1213 10:20:50.162598    8676 provision.go:177] copyRemoteCerts
	I1213 10:20:50.167582    8676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:20:50.170602    8676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-416400
	I1213 10:20:50.230662    8676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53687 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-416400\id_rsa Username:docker}
	I1213 10:20:50.364536    8676 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:20:50.399384    8676 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1233 bytes)
	I1213 10:20:50.426029    8676 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:20:50.453797    8676 provision.go:87] duration metric: took 515.8271ms to configureAuth
	I1213 10:20:50.453862    8676 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:20:50.454298    8676 config.go:182] Loaded profile config "custom-flannel-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 10:20:50.458055    8676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-416400
	I1213 10:20:50.518167    8676 main.go:143] libmachine: Using SSH client type: native
	I1213 10:20:50.519123    8676 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53687 <nil> <nil>}
	I1213 10:20:50.519158    8676 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 10:20:50.370517   12636 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-416400\apiserver.crt.c0b96fc3 ...
	I1213 10:20:50.370517   12636 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-416400\apiserver.crt.c0b96fc3: {Name:mkeba50655b36824fb3a63608aa789f0d36ea670 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:20:50.371250   12636 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-416400\apiserver.key.c0b96fc3 ...
	I1213 10:20:50.371250   12636 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-416400\apiserver.key.c0b96fc3: {Name:mk99efbf2a2c96477ab38fca25bf66862bd8cdcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:20:50.371900   12636 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-416400\apiserver.crt.c0b96fc3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-416400\apiserver.crt
	I1213 10:20:50.387349   12636 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-416400\apiserver.key.c0b96fc3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-416400\apiserver.key
	I1213 10:20:50.387962   12636 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-416400\proxy-client.key
	I1213 10:20:50.387962   12636 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-416400\proxy-client.crt with IP's: []
	I1213 10:20:50.452382   12636 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-416400\proxy-client.crt ...
	I1213 10:20:50.452382   12636 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-416400\proxy-client.crt: {Name:mkaa23201b5fd3c122d299a31df7405b1422d8b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:20:50.453696   12636 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-416400\proxy-client.key ...
	I1213 10:20:50.453775   12636 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-416400\proxy-client.key: {Name:mk61a7e12144a4266881a2bfb91a36735dc11833 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:20:50.468355   12636 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem (1338 bytes)
	W1213 10:20:50.468355   12636 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968_empty.pem, impossibly tiny 0 bytes
	I1213 10:20:50.468355   12636 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1213 10:20:50.468355   12636 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1213 10:20:50.468355   12636 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1213 10:20:50.468355   12636 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1213 10:20:50.469591   12636 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem (1708 bytes)
	I1213 10:20:50.470506   12636 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:20:50.508936   12636 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:20:50.546581   12636 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:20:50.575172   12636 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 10:20:50.605414   12636 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-416400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 10:20:50.634430   12636 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-416400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:20:50.663584   12636 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-416400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:20:50.698587   12636 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-416400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:20:50.736657   12636 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem --> /usr/share/ca-certificates/2968.pem (1338 bytes)
	I1213 10:20:50.775683   12636 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /usr/share/ca-certificates/29682.pem (1708 bytes)
	I1213 10:20:50.802704   12636 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:20:50.835702   12636 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:20:50.866060   12636 ssh_runner.go:195] Run: openssl version
	I1213 10:20:50.881301   12636 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2968.pem
	I1213 10:20:50.903334   12636 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2968.pem /etc/ssl/certs/2968.pem
	I1213 10:20:50.921939   12636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2968.pem
	I1213 10:20:50.930110   12636 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:48 /usr/share/ca-certificates/2968.pem
	I1213 10:20:50.934106   12636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2968.pem
	I1213 10:20:50.986712   12636 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:20:51.002703   12636 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2968.pem /etc/ssl/certs/51391683.0
	I1213 10:20:51.018708   12636 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29682.pem
	I1213 10:20:51.048371   12636 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29682.pem /etc/ssl/certs/29682.pem
	I1213 10:20:51.069945   12636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29682.pem
	I1213 10:20:51.077081   12636 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:48 /usr/share/ca-certificates/29682.pem
	I1213 10:20:51.082548   12636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29682.pem
	I1213 10:20:51.135289   12636 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:20:51.152688   12636 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/29682.pem /etc/ssl/certs/3ec20f2e.0
	I1213 10:20:51.168996   12636 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:20:51.188090   12636 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:20:51.207868   12636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:20:51.217229   12636 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:20:51.221672   12636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:20:51.270269   12636 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:20:51.290275   12636 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
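[note] The sequence above installs each CA three ways: the PEM under /usr/share/ca-certificates, a name-based symlink in /etc/ssl/certs, and a <subject-hash>.0 symlink, which is the name OpenSSL's hashed-directory lookup actually resolves (e.g. b5213941.0 for minikubeCA). A sketch of the hash-and-link step (illustrative; assumes openssl on PATH and sufficient privileges):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func installCALink(pemPath string) error {
    	// `openssl x509 -hash -noout` prints the subject hash, as in the log above.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	os.Remove(link) // mirror `ln -fs`: replace any stale link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCALink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }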
	I1213 10:20:51.310495   12636 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:20:51.321503   12636 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 10:20:51.322725   12636 kubeadm.go:401] StartCluster: {Name:calico-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:20:51.326160   12636 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 10:20:51.363326   12636 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:20:51.383184   12636 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:20:51.399454   12636 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:20:51.403699   12636 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:20:51.416660   12636 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:20:51.416660   12636 kubeadm.go:158] found existing configuration files:
	
	I1213 10:20:51.421297   12636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 10:20:51.436755   12636 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:20:51.440904   12636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:20:51.457101   12636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 10:20:51.470676   12636 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:20:51.474781   12636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:20:51.494151   12636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 10:20:51.506155   12636 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:20:51.510151   12636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:20:51.525152   12636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 10:20:51.537153   12636 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:20:51.541152   12636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
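[note] The grep/rm sequence above is minikube's stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443; anything else is removed so `kubeadm init` regenerates it (here all four files are simply absent on first start). The equivalent logic as a short Go sketch:

    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			os.Remove(f) // missing or pointing elsewhere: drop it so kubeadm rewrites it
    		}
    	}
    }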
	I1213 10:20:51.560191   12636 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:20:51.700616   12636 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1213 10:20:51.705867   12636 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1213 10:20:51.804636   12636 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:20:50.762129    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:20:50.866259    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:50.866874    8468 retry.go:31] will retry after 28.312317849s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:51.279935    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:20:51.378474    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:51.378474    8468 retry.go:31] will retry after 11.232412768s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:55.277503    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:20:55.359684    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:20:55.359799    8468 retry.go:31] will retry after 25.131057199s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
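[note] The interleaved pid-8468 lines show the addon applier failing against an apiserver that is not yet listening (connection refused on localhost:8443) and rescheduling itself via retry.go with growing waits (11s, 25s, 28s). A bare-bones sketch of that retry loop (illustrative; minikube's actual backoff is jittered, and the kubectl invocation here is simplified):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func applyWithRetry(manifest string, attempts int) error {
    	delay := 10 * time.Second
    	for i := 0; i < attempts; i++ {
    		if exec.Command("kubectl", "apply", "--force", "-f", manifest).Run() == nil {
    			return nil
    		}
    		fmt.Printf("apply failed, will retry after %s\n", delay)
    		time.Sleep(delay)
    		delay *= 2 // grow the wait while the apiserver comes up
    	}
    	return fmt.Errorf("giving up on %s after %d attempts", manifest, attempts)
    }

    func main() {
    	_ = applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5)
    }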
	I1213 10:20:50.697799    8676 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1213 10:20:50.697799    8676 ubuntu.go:71] root file system type: overlay
	I1213 10:20:50.697799    8676 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 10:20:50.701523    8676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-416400
	I1213 10:20:50.764029    8676 main.go:143] libmachine: Using SSH client type: native
	I1213 10:20:50.764151    8676 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53687 <nil> <nil>}
	I1213 10:20:50.764151    8676 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 10:20:50.971710    8676 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 10:20:50.974712    8676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-416400
	I1213 10:20:51.026113    8676 main.go:143] libmachine: Using SSH client type: native
	I1213 10:20:51.026625    8676 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53687 <nil> <nil>}
	I1213 10:20:51.026625    8676 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 10:20:52.635587    8676 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-13 10:20:50.958431900 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
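[note] The SSH command above is the guard minikube uses when updating the docker unit: `diff -u old new` exits 0 when nothing changed, so the mv/daemon-reload/enable/restart chain only runs when the rendered file actually differs, which the printed diff confirms here. The same guard as a Go sketch (illustrative; assumes root and the paths shown):

    package main

    import "os/exec"

    func syncUnit() error {
    	// diff exits non-zero when the files differ; that is the signal to install the new unit.
    	if exec.Command("diff", "-u",
    		"/lib/systemd/system/docker.service",
    		"/lib/systemd/system/docker.service.new").Run() == nil {
    		return nil // identical: nothing to do
    	}
    	steps := [][]string{
    		{"mv", "/lib/systemd/system/docker.service.new", "/lib/systemd/system/docker.service"},
    		{"systemctl", "-f", "daemon-reload"},
    		{"systemctl", "-f", "enable", "docker"},
    		{"systemctl", "-f", "restart", "docker"},
    	}
    	for _, s := range steps {
    		if err := exec.Command(s[0], s[1:]...).Run(); err != nil {
    			return err
    		}
    	}
    	return nil
    }

    func main() { _ = syncUnit() }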
	I1213 10:20:52.636111    8676 machine.go:97] duration metric: took 3.4337262s to provisionDockerMachine
	I1213 10:20:52.636164    8676 client.go:176] duration metric: took 20.9181613s to LocalClient.Create
	I1213 10:20:52.636164    8676 start.go:167] duration metric: took 20.9182143s to libmachine.API.Create "custom-flannel-416400"
	I1213 10:20:52.636164    8676 start.go:293] postStartSetup for "custom-flannel-416400" (driver="docker")
	I1213 10:20:52.636220    8676 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:20:52.641805    8676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:20:52.645175    8676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-416400
	I1213 10:20:52.698388    8676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53687 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-416400\id_rsa Username:docker}
	I1213 10:20:52.828936    8676 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:20:52.835862    8676 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:20:52.835862    8676 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:20:52.835862    8676 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1213 10:20:52.835862    8676 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1213 10:20:52.836583    8676 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> 29682.pem in /etc/ssl/certs
	I1213 10:20:52.841632    8676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 10:20:52.855301    8676 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /etc/ssl/certs/29682.pem (1708 bytes)
	I1213 10:20:52.886381    8676 start.go:296] duration metric: took 250.2133ms for postStartSetup
	I1213 10:20:52.891977    8676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-416400
	I1213 10:20:52.946233    8676 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-416400\config.json ...
	I1213 10:20:52.952133    8676 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:20:52.955735    8676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-416400
	I1213 10:20:53.008196    8676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53687 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-416400\id_rsa Username:docker}
	I1213 10:20:53.146260    8676 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:20:53.158519    8676 start.go:128] duration metric: took 21.444551s to createHost
	I1213 10:20:53.158519    8676 start.go:83] releasing machines lock for "custom-flannel-416400", held for 21.444551s
	I1213 10:20:53.163081    8676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-416400
	I1213 10:20:53.218077    8676 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1213 10:20:53.221077    8676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-416400
	I1213 10:20:53.221077    8676 ssh_runner.go:195] Run: cat /version.json
	I1213 10:20:53.224076    8676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-416400
	I1213 10:20:53.271074    8676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53687 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-416400\id_rsa Username:docker}
	I1213 10:20:53.272073    8676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53687 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-416400\id_rsa Username:docker}
	W1213 10:20:53.386600    8676 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1213 10:20:53.391888    8676 ssh_runner.go:195] Run: systemctl --version
	I1213 10:20:53.409149    8676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:20:53.417239    8676 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:20:53.422116    8676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:20:53.481504    8676 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 10:20:53.481504    8676 start.go:496] detecting cgroup driver to use...
	I1213 10:20:53.481504    8676 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:20:53.481504    8676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1213 10:20:53.483594    8676 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1213 10:20:53.483594    8676 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1213 10:20:53.510294    8676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:20:53.529097    8676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:20:53.543716    8676 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:20:53.547721    8676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:20:53.567952    8676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:20:53.588312    8676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:20:53.609786    8676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:20:53.630394    8676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:20:53.650988    8676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:20:53.672276    8676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:20:53.690644    8676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
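The run of `sed -i` edits above rewrites /etc/containerd/config.toml in place: pinning the sandbox image, disabling restrict_oom_score_adj, forcing SystemdCgroup = false to match the detected "cgroupfs" driver, migrating runtime names to io.containerd.runc.v2, setting conf_dir, and re-inserting enable_unprivileged_ports = true under the CRI plugin table. A quick way to spot-check the result by hand (a sketch; the exact TOML table paths vary by containerd config version, which is why the edits are regex-based):

	# Expect SystemdCgroup = false and the pinned pause image after the edits.
	sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml
	sudo grep -n 'sandbox_image' /etc/containerd/config.toml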
	I1213 10:20:53.712962    8676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:20:53.732491    8676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:20:53.752147    8676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:20:53.891743    8676 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 10:20:54.050200    8676 start.go:496] detecting cgroup driver to use...
	I1213 10:20:54.050200    8676 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:20:54.054419    8676 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 10:20:54.080187    8676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:20:54.103227    8676 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:20:54.169366    8676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:20:54.193519    8676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:20:54.214383    8676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:20:54.243971    8676 ssh_runner.go:195] Run: which cri-dockerd
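The two-line command above rewrites /etc/crictl.yaml, repointing crictl from the containerd socket used earlier to cri-dockerd, to match the docker runtime chosen for this profile. The file it produces is a single line, and can be exercised directly once the socket is up (a sketch; the log later resolves crictl at /usr/local/bin/crictl):

	# /etc/crictl.yaml now contains: runtime-endpoint: unix:///var/run/cri-dockerd.sock
	sudo crictl info   # reads /etc/crictl.yaml by default and queries that endpoint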
	I1213 10:20:54.255515    8676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 10:20:54.270109    8676 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1213 10:20:54.295684    8676 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 10:20:54.436353    8676 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 10:20:54.565700    8676 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 10:20:54.565824    8676 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 10:20:54.593351    8676 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1213 10:20:54.615555    8676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:20:54.766756    8676 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 10:20:55.721964    8676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:20:55.747063    8676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 10:20:55.769526    8676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 10:20:55.791991    8676 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 10:20:55.960853    8676 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 10:20:56.132322    8676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:20:56.279635    8676 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 10:20:56.306697    8676 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1213 10:20:56.329109    8676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:20:56.483808    8676 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 10:20:56.588982    8676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 10:20:56.611750    8676 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 10:20:56.616520    8676 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 10:20:56.623740    8676 start.go:564] Will wait 60s for crictl version
	I1213 10:20:56.627983    8676 ssh_runner.go:195] Run: which crictl
	I1213 10:20:56.639262    8676 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:20:56.685708    8676 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1213 10:20:56.689533    8676 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 10:20:56.731205    8676 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	W1213 10:21:00.383691    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	I1213 10:20:56.771719    8676 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.2 ...
	I1213 10:20:56.775210    8676 cli_runner.go:164] Run: docker exec -t custom-flannel-416400 dig +short host.docker.internal
	I1213 10:20:56.922453    8676 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1213 10:20:56.927223    8676 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1213 10:20:56.937943    8676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
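The /etc/hosts refresh above is idempotent: `grep -v` drops any stale host.minikube.internal mapping, the current one is appended, and the result is copied back over /etc/hosts with sudo (a plain redirect would fail, since the shell opens the target before sudo takes effect). The same idiom, unrolled (IP taken from the dig result above):

	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  printf '%s\thost.minikube.internal\n' 192.168.65.254; } > /tmp/h.$$ \
	  && sudo cp /tmp/h.$$ /etc/hosts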
	I1213 10:20:56.957915    8676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-flannel-416400
	I1213 10:20:57.008626    8676 kubeadm.go:884] updating cluster {Name:custom-flannel-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP:192.168.112.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:20:57.008626    8676 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:20:57.011618    8676 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 10:20:57.045552    8676 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 10:20:57.045552    8676 docker.go:621] Images already preloaded, skipping extraction
	I1213 10:20:57.048548    8676 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 10:20:57.082935    8676 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 10:20:57.082935    8676 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:20:57.082935    8676 kubeadm.go:935] updating node { 192.168.112.2 8443 v1.34.2 docker true true} ...
	I1213 10:20:57.082935    8676 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-416400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.112.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml}
	I1213 10:20:57.086931    8676 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1213 10:20:57.168854    8676 cni.go:84] Creating CNI manager for "testdata\\kube-flannel.yaml"
	I1213 10:20:57.168854    8676 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:20:57.168854    8676 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.112.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-416400 NodeName:custom-flannel-416400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.112.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.112.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:20:57.168854    8676 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.112.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "custom-flannel-416400"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.112.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.112.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 10:20:57.173992    8676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 10:20:57.186385    8676 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:20:57.190287    8676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:20:57.205108    8676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1213 10:20:57.227337    8676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 10:20:57.248733    8676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
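The rendered kubeadm config above is staged to /var/tmp/minikube/kubeadm.yaml.new and only promoted over the live kubeadm.yaml later (see the `sudo cp` further down). To sanity-check a config like this by hand, recent kubeadm releases can validate it offline (a sketch; assumes kubeadm v1.26 or newer, here the copy minikube stages under /var/lib/minikube/binaries):

	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new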
	I1213 10:20:57.276005    8676 ssh_runner.go:195] Run: grep 192.168.112.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:20:57.282857    8676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.112.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:20:57.302789    8676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:20:57.457294    8676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:20:57.479960    8676 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-416400 for IP: 192.168.112.2
	I1213 10:20:57.479960    8676 certs.go:195] generating shared ca certs ...
	I1213 10:20:57.479960    8676 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:20:57.480827    8676 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1213 10:20:57.480876    8676 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1213 10:20:57.480876    8676 certs.go:257] generating profile certs ...
	I1213 10:20:57.481536    8676 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-416400\client.key
	I1213 10:20:57.481536    8676 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-416400\client.crt with IP's: []
	I1213 10:20:57.552695    8676 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-416400\client.crt ...
	I1213 10:20:57.553696    8676 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-416400\client.crt: {Name:mkf18ff3c046562fb868b06a161748eb1515ba84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:20:57.554718    8676 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-416400\client.key ...
	I1213 10:20:57.554718    8676 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-416400\client.key: {Name:mk40591cd482541a778a70f12bbec9205d33a63f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:20:57.555689    8676 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-416400\apiserver.key.4729cfa6
	I1213 10:20:57.555689    8676 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-416400\apiserver.crt.4729cfa6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.112.2]
	I1213 10:20:57.649395    8676 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-416400\apiserver.crt.4729cfa6 ...
	I1213 10:20:57.650387    8676 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-416400\apiserver.crt.4729cfa6: {Name:mk80dfa8f2ebcf362af3f775e7fc33321136d54d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:20:57.651021    8676 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-416400\apiserver.key.4729cfa6 ...
	I1213 10:20:57.651021    8676 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-416400\apiserver.key.4729cfa6: {Name:mkb5bf0f9d483e98217444435ecfa290ac676f55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:20:57.651640    8676 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-416400\apiserver.crt.4729cfa6 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-416400\apiserver.crt
	I1213 10:20:57.667886    8676 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-416400\apiserver.key.4729cfa6 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-416400\apiserver.key
	I1213 10:20:57.669230    8676 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-416400\proxy-client.key
	I1213 10:20:57.669230    8676 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-416400\proxy-client.crt with IP's: []
	I1213 10:20:57.697843    8676 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-416400\proxy-client.crt ...
	I1213 10:20:57.697843    8676 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-416400\proxy-client.crt: {Name:mk9c8bf5327c5a34604168d391a70a9f788d2948 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:20:57.698423    8676 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-416400\proxy-client.key ...
	I1213 10:20:57.699418    8676 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-416400\proxy-client.key: {Name:mk6b1d8518a75da38b2a8bbf8ded4e0be91722b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:20:57.713430    8676 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem (1338 bytes)
	W1213 10:20:57.714427    8676 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968_empty.pem, impossibly tiny 0 bytes
	I1213 10:20:57.714427    8676 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1213 10:20:57.714427    8676 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1213 10:20:57.714427    8676 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1213 10:20:57.714427    8676 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1213 10:20:57.714427    8676 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem (1708 bytes)
	I1213 10:20:57.715429    8676 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:20:57.744757    8676 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:20:57.776831    8676 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:20:57.807946    8676 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 10:20:57.835620    8676 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-416400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1213 10:20:57.870947    8676 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-416400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:20:57.901269    8676 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-416400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:20:57.933248    8676 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-416400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:20:57.960843    8676 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:20:57.990407    8676 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem --> /usr/share/ca-certificates/2968.pem (1338 bytes)
	I1213 10:20:58.020906    8676 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /usr/share/ca-certificates/29682.pem (1708 bytes)
	I1213 10:20:58.056157    8676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:20:58.079514    8676 ssh_runner.go:195] Run: openssl version
	I1213 10:20:58.094342    8676 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29682.pem
	I1213 10:20:58.113318    8676 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29682.pem /etc/ssl/certs/29682.pem
	I1213 10:20:58.135878    8676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29682.pem
	I1213 10:20:58.146759    8676 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:48 /usr/share/ca-certificates/29682.pem
	I1213 10:20:58.151618    8676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29682.pem
	I1213 10:20:58.201339    8676 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:20:58.219305    8676 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/29682.pem /etc/ssl/certs/3ec20f2e.0
	I1213 10:20:58.240032    8676 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:20:58.259848    8676 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:20:58.275861    8676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:20:58.283858    8676 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:20:58.287848    8676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:20:58.335764    8676 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:20:58.353987    8676 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 10:20:58.373530    8676 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2968.pem
	I1213 10:20:58.397515    8676 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2968.pem /etc/ssl/certs/2968.pem
	I1213 10:20:58.418649    8676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2968.pem
	I1213 10:20:58.427639    8676 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:48 /usr/share/ca-certificates/2968.pem
	I1213 10:20:58.432647    8676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2968.pem
	I1213 10:20:58.491619    8676 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:20:58.509625    8676 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2968.pem /etc/ssl/certs/51391683.0
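The three link names created above (3ec20f2e.0, b5213941.0, 51391683.0) follow the OpenSSL subject-hash convention: verifiers that scan /etc/ssl/certs look up <subject_hash>.0, and the hash is exactly what `openssl x509 -hash -noout` prints, which is why each symlink is preceded by that command. The pairing can be reproduced by hand (sketch, using the last cert above):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/2968.pem)
	sudo ln -fs /etc/ssl/certs/2968.pem "/etc/ssl/certs/${h}.0"   # h is 51391683 here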
	I1213 10:20:58.527625    8676 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:20:58.534627    8676 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 10:20:58.534627    8676 kubeadm.go:401] StartCluster: {Name:custom-flannel-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP:192.168.112.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:20:58.539619    8676 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 10:20:58.571608    8676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:20:58.588609    8676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:20:58.601624    8676 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:20:58.606625    8676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:20:58.624438    8676 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:20:58.624438    8676 kubeadm.go:158] found existing configuration files:
	
	I1213 10:20:58.628441    8676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 10:20:58.641435    8676 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:20:58.644434    8676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:20:58.660435    8676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 10:20:58.674798    8676 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:20:58.678903    8676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:20:58.697316    8676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 10:20:58.715644    8676 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:20:58.719715    8676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:20:58.736703    8676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 10:20:58.748701    8676 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:20:58.752700    8676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:20:58.768700    8676 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:20:58.886103    8676 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1213 10:20:58.891182    8676 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1213 10:20:58.987554    8676 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
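Of the three preflight warnings, only the last is actionable on an ordinary host, and the remedy is the one-liner the message itself names (minikube tolerates it here because it starts the kubelet unit directly):

	sudo systemctl enable kubelet.service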
	I1213 10:21:02.617356    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:21:02.697259    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:21:02.697259    8468 retry.go:31] will retry after 17.55495334s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:21:05.821321   12636 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 10:21:05.821446   12636 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:21:05.821573   12636 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:21:05.821573   12636 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:21:05.821573   12636 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:21:05.822253   12636 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:21:05.824131   12636 out.go:252]   - Generating certificates and keys ...
	I1213 10:21:05.824376   12636 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:21:05.824376   12636 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:21:05.824376   12636 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 10:21:05.824376   12636 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 10:21:05.825028   12636 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 10:21:05.825133   12636 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 10:21:05.825273   12636 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 10:21:05.825432   12636 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-416400 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1213 10:21:05.825432   12636 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 10:21:05.825432   12636 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-416400 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1213 10:21:05.825964   12636 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 10:21:05.826125   12636 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 10:21:05.826204   12636 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 10:21:05.826389   12636 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:21:05.826578   12636 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:21:05.826660   12636 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:21:05.826660   12636 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:21:05.826660   12636 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:21:05.826660   12636 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:21:05.827345   12636 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:21:05.827473   12636 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:21:05.831100   12636 out.go:252]   - Booting up control plane ...
	I1213 10:21:05.831100   12636 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:21:05.831700   12636 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:21:05.831700   12636 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:21:05.831700   12636 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:21:05.832290   12636 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:21:05.832290   12636 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:21:05.832290   12636 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:21:05.832824   12636 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:21:05.832947   12636 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:21:05.832947   12636 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:21:05.832947   12636 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501859997s
	I1213 10:21:05.832947   12636 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 10:21:05.832947   12636 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1213 10:21:05.833874   12636 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 10:21:05.833874   12636 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 10:21:05.833874   12636 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.804160382s
	I1213 10:21:05.833874   12636 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.713979854s
	I1213 10:21:05.833874   12636 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.002501417s
	I1213 10:21:05.833874   12636 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 10:21:05.834874   12636 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 10:21:05.834874   12636 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 10:21:05.834874   12636 kubeadm.go:319] [mark-control-plane] Marking the node calico-416400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 10:21:05.834874   12636 kubeadm.go:319] [bootstrap-token] Using token: 1sipn2.ecu3she5sci9wrkf
	I1213 10:21:05.851872   12636 out.go:252]   - Configuring RBAC rules ...
	I1213 10:21:05.852928   12636 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 10:21:05.853046   12636 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 10:21:05.853046   12636 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 10:21:05.853632   12636 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 10:21:05.853730   12636 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 10:21:05.853730   12636 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 10:21:05.854343   12636 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 10:21:05.854456   12636 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 10:21:05.854564   12636 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 10:21:05.854564   12636 kubeadm.go:319] 
	I1213 10:21:05.854564   12636 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 10:21:05.854716   12636 kubeadm.go:319] 
	I1213 10:21:05.854845   12636 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 10:21:05.854845   12636 kubeadm.go:319] 
	I1213 10:21:05.854845   12636 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 10:21:05.854845   12636 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 10:21:05.854845   12636 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 10:21:05.854845   12636 kubeadm.go:319] 
	I1213 10:21:05.854845   12636 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 10:21:05.855367   12636 kubeadm.go:319] 
	I1213 10:21:05.855433   12636 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 10:21:05.855433   12636 kubeadm.go:319] 
	I1213 10:21:05.855433   12636 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 10:21:05.855433   12636 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 10:21:05.855433   12636 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 10:21:05.855433   12636 kubeadm.go:319] 
	I1213 10:21:05.856019   12636 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 10:21:05.856019   12636 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 10:21:05.856019   12636 kubeadm.go:319] 
	I1213 10:21:05.856019   12636 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 1sipn2.ecu3she5sci9wrkf \
	I1213 10:21:05.856615   12636 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4e186cc62273bb1ac6e3884beccb3b1254d51eaaca530d60f3ff3ceb07e5bb75 \
	I1213 10:21:05.856713   12636 kubeadm.go:319] 	--control-plane 
	I1213 10:21:05.856713   12636 kubeadm.go:319] 
	I1213 10:21:05.856713   12636 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 10:21:05.856713   12636 kubeadm.go:319] 
	I1213 10:21:05.856713   12636 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 1sipn2.ecu3she5sci9wrkf \
	I1213 10:21:05.856713   12636 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4e186cc62273bb1ac6e3884beccb3b1254d51eaaca530d60f3ff3ceb07e5bb75 
	I1213 10:21:05.857272   12636 cni.go:84] Creating CNI manager for "calico"
	I1213 10:21:05.861045   12636 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1213 10:21:05.864034   12636 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1213 10:21:05.864034   12636 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (329943 bytes)
	I1213 10:21:05.891023   12636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1213 10:21:08.280623   12636 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (2.3895654s)
	I1213 10:21:08.280623   12636 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 10:21:08.285584   12636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:21:08.286584   12636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-416400 minikube.k8s.io/updated_at=2025_12_13T10_21_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453 minikube.k8s.io/name=calico-416400 minikube.k8s.io/primary=true
	I1213 10:21:08.295480   12636 ops.go:34] apiserver oom_adj: -16
	I1213 10:21:08.435137   12636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:21:08.936517   12636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:21:09.435935   12636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:21:09.936711   12636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:21:10.435150   12636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:21:10.538709   12636 kubeadm.go:1114] duration metric: took 2.2580541s to wait for elevateKubeSystemPrivileges
	I1213 10:21:10.539227   12636 kubeadm.go:403] duration metric: took 19.2162468s to StartCluster
	I1213 10:21:10.539282   12636 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:21:10.539378   12636 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:21:10.540698   12636 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:21:10.541829   12636 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 10:21:10.541829   12636 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 10:21:10.541829   12636 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:21:10.541829   12636 addons.go:70] Setting storage-provisioner=true in profile "calico-416400"
	I1213 10:21:10.541829   12636 addons.go:239] Setting addon storage-provisioner=true in "calico-416400"
	I1213 10:21:10.541829   12636 addons.go:70] Setting default-storageclass=true in profile "calico-416400"
	I1213 10:21:10.541829   12636 host.go:66] Checking if "calico-416400" exists ...
	I1213 10:21:10.541829   12636 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "calico-416400"
	I1213 10:21:10.541829   12636 config.go:182] Loaded profile config "calico-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 10:21:10.544340   12636 out.go:179] * Verifying Kubernetes components...
	I1213 10:21:10.552808   12636 cli_runner.go:164] Run: docker container inspect calico-416400 --format={{.State.Status}}
	I1213 10:21:10.553805   12636 cli_runner.go:164] Run: docker container inspect calico-416400 --format={{.State.Status}}
	I1213 10:21:10.557791   12636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:21:10.618760   12636 addons.go:239] Setting addon default-storageclass=true in "calico-416400"
	I1213 10:21:10.619758   12636 host.go:66] Checking if "calico-416400" exists ...
	I1213 10:21:10.619758   12636 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:21:10.621766   12636 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:21:10.621766   12636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:21:10.625757   12636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-416400
	I1213 10:21:10.626766   12636 cli_runner.go:164] Run: docker container inspect calico-416400 --format={{.State.Status}}
	I1213 10:21:10.682775   12636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53642 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-416400\id_rsa Username:docker}
	I1213 10:21:10.685769   12636 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:21:10.685769   12636 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:21:10.689756   12636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-416400
	I1213 10:21:10.721775   12636 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 10:21:10.746751   12636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53642 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-416400\id_rsa Username:docker}
	I1213 10:21:11.016697   12636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:21:11.223673   12636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:21:11.429071   12636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:21:11.909679   12636 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.1877745s)
	I1213 10:21:11.909713   12636 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
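For readability, the sed pipeline above rewrites the CoreDNS Corefile in place: it inserts a log directive before the errors line and the following hosts stanza before the forward . /etc/resolv.conf line, which is what makes host.minikube.internal resolvable from pods:

        hosts {
           192.168.65.254 host.minikube.internal
           fallthrough
        }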
	I1213 10:21:11.914689   12636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-416400
	I1213 10:21:11.974092   12636 node_ready.go:35] waiting up to 15m0s for node "calico-416400" to be "Ready" ...
	I1213 10:21:12.419917   12636 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-416400" context rescaled to 1 replicas
	I1213 10:21:12.544963   12636 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.3212717s)
	I1213 10:21:12.544963   12636 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.1158768s)
	I1213 10:21:12.611484   12636 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1213 10:21:12.615123   12636 addons.go:530] duration metric: took 2.0732643s for enable addons: enabled=[storage-provisioner default-storageclass]
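With both addons reported enabled, a minimal double-check from the host would be (a sketch; assumes the minikube binary is on PATH and reuses the profile name from this log):

    # confirm the two addons minikube just enabled, then the resulting StorageClass
    minikube addons list -p calico-416400 | grep -E 'storage-provisioner|default-storageclass'
    kubectl get storageclass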
	W1213 10:21:13.980484   12636 node_ready.go:57] node "calico-416400" has "Ready":"False" status (will retry)
	W1213 10:21:10.425009    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	I1213 10:21:15.523071    8676 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 10:21:15.523071    8676 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:21:15.524086    8676 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:21:15.524086    8676 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:21:15.524086    8676 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:21:15.524086    8676 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:21:15.528086    8676 out.go:252]   - Generating certificates and keys ...
	I1213 10:21:15.528086    8676 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:21:15.528086    8676 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:21:15.528086    8676 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 10:21:15.528086    8676 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 10:21:15.529079    8676 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 10:21:15.529079    8676 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 10:21:15.529079    8676 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 10:21:15.529079    8676 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-416400 localhost] and IPs [192.168.112.2 127.0.0.1 ::1]
	I1213 10:21:15.529079    8676 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 10:21:15.530064    8676 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-416400 localhost] and IPs [192.168.112.2 127.0.0.1 ::1]
	I1213 10:21:15.530064    8676 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 10:21:15.530064    8676 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 10:21:15.530064    8676 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 10:21:15.530064    8676 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:21:15.531078    8676 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:21:15.531078    8676 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:21:15.531078    8676 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:21:15.531078    8676 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:21:15.531078    8676 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:21:15.531078    8676 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:21:15.532075    8676 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:21:15.535071    8676 out.go:252]   - Booting up control plane ...
	I1213 10:21:15.535071    8676 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:21:15.535071    8676 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:21:15.535071    8676 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:21:15.535071    8676 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:21:15.536073    8676 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:21:15.536073    8676 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:21:15.536073    8676 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:21:15.536073    8676 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:21:15.536073    8676 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:21:15.537077    8676 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:21:15.537077    8676 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001915601s
	I1213 10:21:15.537077    8676 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 10:21:15.537077    8676 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.112.2:8443/livez
	I1213 10:21:15.538074    8676 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 10:21:15.538074    8676 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 10:21:15.538074    8676 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.207343037s
	I1213 10:21:15.538074    8676 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.420290847s
	I1213 10:21:15.538074    8676 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.003073712s
	I1213 10:21:15.539069    8676 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 10:21:15.539069    8676 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 10:21:15.539069    8676 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 10:21:15.539069    8676 kubeadm.go:319] [mark-control-plane] Marking the node custom-flannel-416400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 10:21:15.540086    8676 kubeadm.go:319] [bootstrap-token] Using token: duxwom.iifbcz3cuw2bmyn2
	I1213 10:21:15.542072    8676 out.go:252]   - Configuring RBAC rules ...
	I1213 10:21:15.542072    8676 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 10:21:15.543068    8676 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 10:21:15.543068    8676 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 10:21:15.543068    8676 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 10:21:15.544082    8676 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 10:21:15.544082    8676 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 10:21:15.544082    8676 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 10:21:15.544082    8676 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 10:21:15.544082    8676 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 10:21:15.545079    8676 kubeadm.go:319] 
	I1213 10:21:15.545079    8676 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 10:21:15.545079    8676 kubeadm.go:319] 
	I1213 10:21:15.545079    8676 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 10:21:15.545079    8676 kubeadm.go:319] 
	I1213 10:21:15.545079    8676 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 10:21:15.545079    8676 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 10:21:15.545079    8676 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 10:21:15.545079    8676 kubeadm.go:319] 
	I1213 10:21:15.545079    8676 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 10:21:15.545079    8676 kubeadm.go:319] 
	I1213 10:21:15.546068    8676 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 10:21:15.546068    8676 kubeadm.go:319] 
	I1213 10:21:15.546068    8676 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 10:21:15.546068    8676 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 10:21:15.546068    8676 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 10:21:15.546068    8676 kubeadm.go:319] 
	I1213 10:21:15.546068    8676 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 10:21:15.547072    8676 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 10:21:15.547072    8676 kubeadm.go:319] 
	I1213 10:21:15.547072    8676 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token duxwom.iifbcz3cuw2bmyn2 \
	I1213 10:21:15.547072    8676 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4e186cc62273bb1ac6e3884beccb3b1254d51eaaca530d60f3ff3ceb07e5bb75 \
	I1213 10:21:15.547072    8676 kubeadm.go:319] 	--control-plane 
	I1213 10:21:15.547072    8676 kubeadm.go:319] 
	I1213 10:21:15.547072    8676 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 10:21:15.547072    8676 kubeadm.go:319] 
	I1213 10:21:15.548092    8676 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token duxwom.iifbcz3cuw2bmyn2 \
	I1213 10:21:15.548092    8676 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4e186cc62273bb1ac6e3884beccb3b1254d51eaaca530d60f3ff3ceb07e5bb75 
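The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. Per the upstream kubeadm documentation it can be recomputed on the control plane from the certificate directory named in the [certs] phase above (a sketch, run inside the node; assumes the default RSA CA):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | sha256sum | cut -d' ' -f1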
	I1213 10:21:15.548092    8676 cni.go:84] Creating CNI manager for "testdata\\kube-flannel.yaml"
	I1213 10:21:15.552077    8676 out.go:179] * Configuring testdata\kube-flannel.yaml (Container Networking Interface) ...
	I1213 10:21:15.575073    8676 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1213 10:21:15.579084    8676 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1213 10:21:15.601098    8676 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1213 10:21:15.602073    8676 ssh_runner.go:362] scp testdata\kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4578 bytes)
	W1213 10:21:15.981084   12636 node_ready.go:57] node "calico-416400" has "Ready":"False" status (will retry)
	W1213 10:21:18.499138   12636 node_ready.go:57] node "calico-416400" has "Ready":"False" status (will retry)
	I1213 10:21:19.184086    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:21:19.276360    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:21:19.276360    8468 retry.go:31] will retry after 37.24462991s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:21:20.257313    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:21:20.340817    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:21:20.340817    8468 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
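The validation errors above are a symptom rather than the cause: kubectl tries to download the OpenAPI schema from https://localhost:8443 and gets connection refused because the apiserver is not accepting connections yet, so --validate=false would not help here and the retry loop in this log is the right remedy. A minimal liveness probe mirroring the healthz check performed later in this log (a sketch, run inside the node):

    # -k: self-signed cert, -s: quiet, -f: non-zero exit on HTTP error
    curl -ksf https://localhost:8443/healthz && echo ok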
	I1213 10:21:15.716097    8676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1213 10:21:16.338040    8676 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 10:21:16.344053    8676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:21:16.345045    8676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-416400 minikube.k8s.io/updated_at=2025_12_13T10_21_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453 minikube.k8s.io/name=custom-flannel-416400 minikube.k8s.io/primary=true
	I1213 10:21:16.360061    8676 ops.go:34] apiserver oom_adj: -16
	I1213 10:21:16.519051    8676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:21:17.017574    8676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:21:17.518966    8676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:21:18.018141    8676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:21:18.518440    8676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:21:19.017636    8676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:21:19.517806    8676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:21:20.017480    8676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:21:20.518875    8676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:21:20.979985    8676 kubeadm.go:1114] duration metric: took 4.6418787s to wait for elevateKubeSystemPrivileges
	I1213 10:21:20.979985    8676 kubeadm.go:403] duration metric: took 22.4450379s to StartCluster
	I1213 10:21:20.979985    8676 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:21:20.980770    8676 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:21:20.982756    8676 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:21:20.984291    8676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 10:21:20.984526    8676 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.112.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 10:21:20.984462    8676 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:21:20.984828    8676 addons.go:70] Setting storage-provisioner=true in profile "custom-flannel-416400"
	I1213 10:21:20.984828    8676 addons.go:239] Setting addon storage-provisioner=true in "custom-flannel-416400"
	I1213 10:21:20.984955    8676 host.go:66] Checking if "custom-flannel-416400" exists ...
	I1213 10:21:20.985087    8676 addons.go:70] Setting default-storageclass=true in profile "custom-flannel-416400"
	I1213 10:21:20.985087    8676 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-416400"
	I1213 10:21:20.985249    8676 config.go:182] Loaded profile config "custom-flannel-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 10:21:20.994271    8676 cli_runner.go:164] Run: docker container inspect custom-flannel-416400 --format={{.State.Status}}
	I1213 10:21:20.994307    8676 cli_runner.go:164] Run: docker container inspect custom-flannel-416400 --format={{.State.Status}}
	I1213 10:21:21.052833    8676 addons.go:239] Setting addon default-storageclass=true in "custom-flannel-416400"
	I1213 10:21:21.052833    8676 host.go:66] Checking if "custom-flannel-416400" exists ...
	I1213 10:21:21.073829    8676 out.go:179] * Verifying Kubernetes components...
	I1213 10:21:21.060831    8676 cli_runner.go:164] Run: docker container inspect custom-flannel-416400 --format={{.State.Status}}
	I1213 10:21:21.106578    8676 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:21:21.125582    8676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:21:21.125582    8676 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:21:21.125582    8676 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:21:21.128579    8676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-416400
	I1213 10:21:21.177579    8676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53687 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-416400\id_rsa Username:docker}
	I1213 10:21:21.178582    8676 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:21:21.178582    8676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:21:21.181594    8676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-416400
	I1213 10:21:21.235584    8676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53687 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-416400\id_rsa Username:docker}
	I1213 10:21:21.305614    8676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 10:21:21.326368    8676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:21:21.330483    8676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:21:21.520267    8676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:21:22.040372    8676 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I1213 10:21:22.049650    8676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-flannel-416400
	I1213 10:21:22.111030    8676 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-416400" to be "Ready" ...
	I1213 10:21:22.241812    8676 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1213 10:21:20.982756   12636 node_ready.go:57] node "calico-416400" has "Ready":"False" status (will retry)
	W1213 10:21:22.993396   12636 node_ready.go:57] node "calico-416400" has "Ready":"False" status (will retry)
	W1213 10:21:20.463313    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	I1213 10:21:20.495524    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:21:20.586987    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:21:20.586987    8468 retry.go:31] will retry after 24.857879765s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:21:22.246806    8676 addons.go:530] duration metric: took 1.2613285s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1213 10:21:22.552644    8676 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-416400" context rescaled to 1 replicas
	W1213 10:21:24.118084    8676 node_ready.go:57] node "custom-flannel-416400" has "Ready":"False" status (will retry)
	W1213 10:21:25.480070   12636 node_ready.go:57] node "calico-416400" has "Ready":"False" status (will retry)
	W1213 10:21:27.979832   12636 node_ready.go:57] node "calico-416400" has "Ready":"False" status (will retry)
	I1213 10:21:29.510865   12636 node_ready.go:49] node "calico-416400" is "Ready"
	I1213 10:21:29.510865   12636 node_ready.go:38] duration metric: took 17.5365213s for node "calico-416400" to be "Ready" ...
	I1213 10:21:29.510865   12636 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:21:29.515854   12636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:21:29.539851   12636 api_server.go:72] duration metric: took 18.99775s to wait for apiserver process to appear ...
	I1213 10:21:29.539851   12636 api_server.go:88] waiting for apiserver healthz status ...
	I1213 10:21:29.539851   12636 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:53646/healthz ...
	I1213 10:21:29.548884   12636 api_server.go:279] https://127.0.0.1:53646/healthz returned 200:
	ok
	I1213 10:21:29.551871   12636 api_server.go:141] control plane version: v1.34.2
	I1213 10:21:29.551871   12636 api_server.go:131] duration metric: took 12.0198ms to wait for apiserver health ...
	I1213 10:21:29.551871   12636 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 10:21:29.559878   12636 system_pods.go:59] 9 kube-system pods found
	I1213 10:21:29.559878   12636 system_pods.go:61] "calico-kube-controllers-5c676f698c-bq78v" [c4e042ef-66b5-4089-91df-27b25d8fd24d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 10:21:29.559878   12636 system_pods.go:61] "calico-node-v25mq" [f524f291-f779-4efd-8ebb-973086872a70] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 10:21:29.559878   12636 system_pods.go:61] "coredns-66bc5c9577-lxr5q" [a495a611-6727-4a9f-9593-1037cf8ef095] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:21:29.559878   12636 system_pods.go:61] "etcd-calico-416400" [56e5dcd4-31ca-46f1-8b54-5e8c38c6746f] Running
	I1213 10:21:29.559878   12636 system_pods.go:61] "kube-apiserver-calico-416400" [2f08bf5c-ceca-4d42-a797-8077aa27d4f5] Running
	I1213 10:21:29.559878   12636 system_pods.go:61] "kube-controller-manager-calico-416400" [38ba20e5-95e2-4d29-9e60-a336c23a211f] Running
	I1213 10:21:29.559878   12636 system_pods.go:61] "kube-proxy-chspq" [6a344fee-c061-4fb8-9de2-201fc2381499] Running
	I1213 10:21:29.559878   12636 system_pods.go:61] "kube-scheduler-calico-416400" [a219bd5f-738d-4144-b4b3-98f17a066814] Running
	I1213 10:21:29.559878   12636 system_pods.go:61] "storage-provisioner" [95ac33a1-5b4c-43c9-9cdf-d186e55eb6b7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:21:29.559878   12636 system_pods.go:74] duration metric: took 8.0068ms to wait for pod list to return data ...
	I1213 10:21:29.559878   12636 default_sa.go:34] waiting for default service account to be created ...
	I1213 10:21:29.565888   12636 default_sa.go:45] found service account: "default"
	I1213 10:21:29.565888   12636 default_sa.go:55] duration metric: took 6.0105ms for default service account to be created ...
	I1213 10:21:29.565888   12636 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 10:21:29.573862   12636 system_pods.go:86] 9 kube-system pods found
	I1213 10:21:29.573862   12636 system_pods.go:89] "calico-kube-controllers-5c676f698c-bq78v" [c4e042ef-66b5-4089-91df-27b25d8fd24d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 10:21:29.573862   12636 system_pods.go:89] "calico-node-v25mq" [f524f291-f779-4efd-8ebb-973086872a70] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 10:21:29.573862   12636 system_pods.go:89] "coredns-66bc5c9577-lxr5q" [a495a611-6727-4a9f-9593-1037cf8ef095] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:21:29.573862   12636 system_pods.go:89] "etcd-calico-416400" [56e5dcd4-31ca-46f1-8b54-5e8c38c6746f] Running
	I1213 10:21:29.573862   12636 system_pods.go:89] "kube-apiserver-calico-416400" [2f08bf5c-ceca-4d42-a797-8077aa27d4f5] Running
	I1213 10:21:29.573862   12636 system_pods.go:89] "kube-controller-manager-calico-416400" [38ba20e5-95e2-4d29-9e60-a336c23a211f] Running
	I1213 10:21:29.573862   12636 system_pods.go:89] "kube-proxy-chspq" [6a344fee-c061-4fb8-9de2-201fc2381499] Running
	I1213 10:21:29.573862   12636 system_pods.go:89] "kube-scheduler-calico-416400" [a219bd5f-738d-4144-b4b3-98f17a066814] Running
	I1213 10:21:29.573862   12636 system_pods.go:89] "storage-provisioner" [95ac33a1-5b4c-43c9-9cdf-d186e55eb6b7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:21:29.573862   12636 retry.go:31] will retry after 250.191565ms: missing components: kube-dns
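Each retry below is blocked on the same thing: coredns (reported as the kube-dns component) cannot go Ready until calico-node finishes initializing the pod network. A way to watch just that pod while the loop runs (a sketch; the label selector is the one the standard CoreDNS deployment uses):

    kubectl -n kube-system get pods -l k8s-app=kube-dns -w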
	I1213 10:21:29.833431   12636 system_pods.go:86] 9 kube-system pods found
	I1213 10:21:29.833431   12636 system_pods.go:89] "calico-kube-controllers-5c676f698c-bq78v" [c4e042ef-66b5-4089-91df-27b25d8fd24d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 10:21:29.833431   12636 system_pods.go:89] "calico-node-v25mq" [f524f291-f779-4efd-8ebb-973086872a70] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 10:21:29.833431   12636 system_pods.go:89] "coredns-66bc5c9577-lxr5q" [a495a611-6727-4a9f-9593-1037cf8ef095] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:21:29.833431   12636 system_pods.go:89] "etcd-calico-416400" [56e5dcd4-31ca-46f1-8b54-5e8c38c6746f] Running
	I1213 10:21:29.833431   12636 system_pods.go:89] "kube-apiserver-calico-416400" [2f08bf5c-ceca-4d42-a797-8077aa27d4f5] Running
	I1213 10:21:29.833431   12636 system_pods.go:89] "kube-controller-manager-calico-416400" [38ba20e5-95e2-4d29-9e60-a336c23a211f] Running
	I1213 10:21:29.833431   12636 system_pods.go:89] "kube-proxy-chspq" [6a344fee-c061-4fb8-9de2-201fc2381499] Running
	I1213 10:21:29.833431   12636 system_pods.go:89] "kube-scheduler-calico-416400" [a219bd5f-738d-4144-b4b3-98f17a066814] Running
	I1213 10:21:29.833431   12636 system_pods.go:89] "storage-provisioner" [95ac33a1-5b4c-43c9-9cdf-d186e55eb6b7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:21:29.833431   12636 retry.go:31] will retry after 372.566993ms: missing components: kube-dns
	I1213 10:21:30.213425   12636 system_pods.go:86] 9 kube-system pods found
	I1213 10:21:30.213425   12636 system_pods.go:89] "calico-kube-controllers-5c676f698c-bq78v" [c4e042ef-66b5-4089-91df-27b25d8fd24d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 10:21:30.213425   12636 system_pods.go:89] "calico-node-v25mq" [f524f291-f779-4efd-8ebb-973086872a70] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 10:21:30.213425   12636 system_pods.go:89] "coredns-66bc5c9577-lxr5q" [a495a611-6727-4a9f-9593-1037cf8ef095] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:21:30.213425   12636 system_pods.go:89] "etcd-calico-416400" [56e5dcd4-31ca-46f1-8b54-5e8c38c6746f] Running
	I1213 10:21:30.213425   12636 system_pods.go:89] "kube-apiserver-calico-416400" [2f08bf5c-ceca-4d42-a797-8077aa27d4f5] Running
	I1213 10:21:30.213425   12636 system_pods.go:89] "kube-controller-manager-calico-416400" [38ba20e5-95e2-4d29-9e60-a336c23a211f] Running
	I1213 10:21:30.213425   12636 system_pods.go:89] "kube-proxy-chspq" [6a344fee-c061-4fb8-9de2-201fc2381499] Running
	I1213 10:21:30.213425   12636 system_pods.go:89] "kube-scheduler-calico-416400" [a219bd5f-738d-4144-b4b3-98f17a066814] Running
	I1213 10:21:30.213425   12636 system_pods.go:89] "storage-provisioner" [95ac33a1-5b4c-43c9-9cdf-d186e55eb6b7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:21:30.213425   12636 retry.go:31] will retry after 341.141837ms: missing components: kube-dns
	W1213 10:21:26.617920    8676 node_ready.go:57] node "custom-flannel-416400" has "Ready":"False" status (will retry)
	W1213 10:21:29.117861    8676 node_ready.go:57] node "custom-flannel-416400" has "Ready":"False" status (will retry)
	I1213 10:21:30.562431   12636 system_pods.go:86] 9 kube-system pods found
	I1213 10:21:30.562431   12636 system_pods.go:89] "calico-kube-controllers-5c676f698c-bq78v" [c4e042ef-66b5-4089-91df-27b25d8fd24d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 10:21:30.562431   12636 system_pods.go:89] "calico-node-v25mq" [f524f291-f779-4efd-8ebb-973086872a70] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 10:21:30.562431   12636 system_pods.go:89] "coredns-66bc5c9577-lxr5q" [a495a611-6727-4a9f-9593-1037cf8ef095] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:21:30.562431   12636 system_pods.go:89] "etcd-calico-416400" [56e5dcd4-31ca-46f1-8b54-5e8c38c6746f] Running
	I1213 10:21:30.562431   12636 system_pods.go:89] "kube-apiserver-calico-416400" [2f08bf5c-ceca-4d42-a797-8077aa27d4f5] Running
	I1213 10:21:30.562431   12636 system_pods.go:89] "kube-controller-manager-calico-416400" [38ba20e5-95e2-4d29-9e60-a336c23a211f] Running
	I1213 10:21:30.562431   12636 system_pods.go:89] "kube-proxy-chspq" [6a344fee-c061-4fb8-9de2-201fc2381499] Running
	I1213 10:21:30.562431   12636 system_pods.go:89] "kube-scheduler-calico-416400" [a219bd5f-738d-4144-b4b3-98f17a066814] Running
	I1213 10:21:30.562431   12636 system_pods.go:89] "storage-provisioner" [95ac33a1-5b4c-43c9-9cdf-d186e55eb6b7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:21:30.562431   12636 retry.go:31] will retry after 412.993353ms: missing components: kube-dns
	I1213 10:21:30.982436   12636 system_pods.go:86] 9 kube-system pods found
	I1213 10:21:30.982436   12636 system_pods.go:89] "calico-kube-controllers-5c676f698c-bq78v" [c4e042ef-66b5-4089-91df-27b25d8fd24d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 10:21:30.982436   12636 system_pods.go:89] "calico-node-v25mq" [f524f291-f779-4efd-8ebb-973086872a70] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 10:21:30.982436   12636 system_pods.go:89] "coredns-66bc5c9577-lxr5q" [a495a611-6727-4a9f-9593-1037cf8ef095] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:21:30.982436   12636 system_pods.go:89] "etcd-calico-416400" [56e5dcd4-31ca-46f1-8b54-5e8c38c6746f] Running
	I1213 10:21:30.982436   12636 system_pods.go:89] "kube-apiserver-calico-416400" [2f08bf5c-ceca-4d42-a797-8077aa27d4f5] Running
	I1213 10:21:30.982436   12636 system_pods.go:89] "kube-controller-manager-calico-416400" [38ba20e5-95e2-4d29-9e60-a336c23a211f] Running
	I1213 10:21:30.982436   12636 system_pods.go:89] "kube-proxy-chspq" [6a344fee-c061-4fb8-9de2-201fc2381499] Running
	I1213 10:21:30.982436   12636 system_pods.go:89] "kube-scheduler-calico-416400" [a219bd5f-738d-4144-b4b3-98f17a066814] Running
	I1213 10:21:30.982436   12636 system_pods.go:89] "storage-provisioner" [95ac33a1-5b4c-43c9-9cdf-d186e55eb6b7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:21:30.982436   12636 retry.go:31] will retry after 654.03302ms: missing components: kube-dns
	I1213 10:21:31.700047   12636 system_pods.go:86] 9 kube-system pods found
	I1213 10:21:31.700047   12636 system_pods.go:89] "calico-kube-controllers-5c676f698c-bq78v" [c4e042ef-66b5-4089-91df-27b25d8fd24d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 10:21:31.700047   12636 system_pods.go:89] "calico-node-v25mq" [f524f291-f779-4efd-8ebb-973086872a70] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 10:21:31.700047   12636 system_pods.go:89] "coredns-66bc5c9577-lxr5q" [a495a611-6727-4a9f-9593-1037cf8ef095] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:21:31.700047   12636 system_pods.go:89] "etcd-calico-416400" [56e5dcd4-31ca-46f1-8b54-5e8c38c6746f] Running
	I1213 10:21:31.701019   12636 system_pods.go:89] "kube-apiserver-calico-416400" [2f08bf5c-ceca-4d42-a797-8077aa27d4f5] Running
	I1213 10:21:31.701019   12636 system_pods.go:89] "kube-controller-manager-calico-416400" [38ba20e5-95e2-4d29-9e60-a336c23a211f] Running
	I1213 10:21:31.701019   12636 system_pods.go:89] "kube-proxy-chspq" [6a344fee-c061-4fb8-9de2-201fc2381499] Running
	I1213 10:21:31.701019   12636 system_pods.go:89] "kube-scheduler-calico-416400" [a219bd5f-738d-4144-b4b3-98f17a066814] Running
	I1213 10:21:31.701019   12636 system_pods.go:89] "storage-provisioner" [95ac33a1-5b4c-43c9-9cdf-d186e55eb6b7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:21:31.701019   12636 retry.go:31] will retry after 915.914519ms: missing components: kube-dns
	I1213 10:21:32.625223   12636 system_pods.go:86] 9 kube-system pods found
	I1213 10:21:32.625223   12636 system_pods.go:89] "calico-kube-controllers-5c676f698c-bq78v" [c4e042ef-66b5-4089-91df-27b25d8fd24d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 10:21:32.625223   12636 system_pods.go:89] "calico-node-v25mq" [f524f291-f779-4efd-8ebb-973086872a70] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 10:21:32.625223   12636 system_pods.go:89] "coredns-66bc5c9577-lxr5q" [a495a611-6727-4a9f-9593-1037cf8ef095] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:21:32.625223   12636 system_pods.go:89] "etcd-calico-416400" [56e5dcd4-31ca-46f1-8b54-5e8c38c6746f] Running
	I1213 10:21:32.625223   12636 system_pods.go:89] "kube-apiserver-calico-416400" [2f08bf5c-ceca-4d42-a797-8077aa27d4f5] Running
	I1213 10:21:32.625223   12636 system_pods.go:89] "kube-controller-manager-calico-416400" [38ba20e5-95e2-4d29-9e60-a336c23a211f] Running
	I1213 10:21:32.625223   12636 system_pods.go:89] "kube-proxy-chspq" [6a344fee-c061-4fb8-9de2-201fc2381499] Running
	I1213 10:21:32.625223   12636 system_pods.go:89] "kube-scheduler-calico-416400" [a219bd5f-738d-4144-b4b3-98f17a066814] Running
	I1213 10:21:32.625223   12636 system_pods.go:89] "storage-provisioner" [95ac33a1-5b4c-43c9-9cdf-d186e55eb6b7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:21:32.625223   12636 retry.go:31] will retry after 834.721838ms: missing components: kube-dns
	I1213 10:21:33.470123   12636 system_pods.go:86] 9 kube-system pods found
	I1213 10:21:33.470123   12636 system_pods.go:89] "calico-kube-controllers-5c676f698c-bq78v" [c4e042ef-66b5-4089-91df-27b25d8fd24d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 10:21:33.470123   12636 system_pods.go:89] "calico-node-v25mq" [f524f291-f779-4efd-8ebb-973086872a70] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 10:21:33.470123   12636 system_pods.go:89] "coredns-66bc5c9577-lxr5q" [a495a611-6727-4a9f-9593-1037cf8ef095] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:21:33.470123   12636 system_pods.go:89] "etcd-calico-416400" [56e5dcd4-31ca-46f1-8b54-5e8c38c6746f] Running
	I1213 10:21:33.470123   12636 system_pods.go:89] "kube-apiserver-calico-416400" [2f08bf5c-ceca-4d42-a797-8077aa27d4f5] Running
	I1213 10:21:33.470123   12636 system_pods.go:89] "kube-controller-manager-calico-416400" [38ba20e5-95e2-4d29-9e60-a336c23a211f] Running
	I1213 10:21:33.470123   12636 system_pods.go:89] "kube-proxy-chspq" [6a344fee-c061-4fb8-9de2-201fc2381499] Running
	I1213 10:21:33.470123   12636 system_pods.go:89] "kube-scheduler-calico-416400" [a219bd5f-738d-4144-b4b3-98f17a066814] Running
	I1213 10:21:33.470123   12636 system_pods.go:89] "storage-provisioner" [95ac33a1-5b4c-43c9-9cdf-d186e55eb6b7] Running
	I1213 10:21:33.470123   12636 retry.go:31] will retry after 940.517143ms: missing components: kube-dns
	I1213 10:21:34.423761   12636 system_pods.go:86] 9 kube-system pods found
	I1213 10:21:34.423761   12636 system_pods.go:89] "calico-kube-controllers-5c676f698c-bq78v" [c4e042ef-66b5-4089-91df-27b25d8fd24d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 10:21:34.423761   12636 system_pods.go:89] "calico-node-v25mq" [f524f291-f779-4efd-8ebb-973086872a70] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 10:21:34.423761   12636 system_pods.go:89] "coredns-66bc5c9577-lxr5q" [a495a611-6727-4a9f-9593-1037cf8ef095] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:21:34.423761   12636 system_pods.go:89] "etcd-calico-416400" [56e5dcd4-31ca-46f1-8b54-5e8c38c6746f] Running
	I1213 10:21:34.423761   12636 system_pods.go:89] "kube-apiserver-calico-416400" [2f08bf5c-ceca-4d42-a797-8077aa27d4f5] Running
	I1213 10:21:34.423761   12636 system_pods.go:89] "kube-controller-manager-calico-416400" [38ba20e5-95e2-4d29-9e60-a336c23a211f] Running
	I1213 10:21:34.423761   12636 system_pods.go:89] "kube-proxy-chspq" [6a344fee-c061-4fb8-9de2-201fc2381499] Running
	I1213 10:21:34.423761   12636 system_pods.go:89] "kube-scheduler-calico-416400" [a219bd5f-738d-4144-b4b3-98f17a066814] Running
	I1213 10:21:34.423761   12636 system_pods.go:89] "storage-provisioner" [95ac33a1-5b4c-43c9-9cdf-d186e55eb6b7] Running
	I1213 10:21:34.423761   12636 retry.go:31] will retry after 1.722066629s: missing components: kube-dns
	W1213 10:21:30.494436    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	W1213 10:21:31.621005    8676 node_ready.go:57] node "custom-flannel-416400" has "Ready":"False" status (will retry)
	W1213 10:21:34.117195    8676 node_ready.go:57] node "custom-flannel-416400" has "Ready":"False" status (will retry)
	I1213 10:21:36.153019   12636 system_pods.go:86] 9 kube-system pods found
	I1213 10:21:36.153019   12636 system_pods.go:89] "calico-kube-controllers-5c676f698c-bq78v" [c4e042ef-66b5-4089-91df-27b25d8fd24d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 10:21:36.153019   12636 system_pods.go:89] "calico-node-v25mq" [f524f291-f779-4efd-8ebb-973086872a70] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 10:21:36.153019   12636 system_pods.go:89] "coredns-66bc5c9577-lxr5q" [a495a611-6727-4a9f-9593-1037cf8ef095] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:21:36.153019   12636 system_pods.go:89] "etcd-calico-416400" [56e5dcd4-31ca-46f1-8b54-5e8c38c6746f] Running
	I1213 10:21:36.153019   12636 system_pods.go:89] "kube-apiserver-calico-416400" [2f08bf5c-ceca-4d42-a797-8077aa27d4f5] Running
	I1213 10:21:36.153019   12636 system_pods.go:89] "kube-controller-manager-calico-416400" [38ba20e5-95e2-4d29-9e60-a336c23a211f] Running
	I1213 10:21:36.153019   12636 system_pods.go:89] "kube-proxy-chspq" [6a344fee-c061-4fb8-9de2-201fc2381499] Running
	I1213 10:21:36.153019   12636 system_pods.go:89] "kube-scheduler-calico-416400" [a219bd5f-738d-4144-b4b3-98f17a066814] Running
	I1213 10:21:36.153019   12636 system_pods.go:89] "storage-provisioner" [95ac33a1-5b4c-43c9-9cdf-d186e55eb6b7] Running
	I1213 10:21:36.153019   12636 retry.go:31] will retry after 1.964729107s: missing components: kube-dns
	I1213 10:21:38.128632   12636 system_pods.go:86] 9 kube-system pods found
	I1213 10:21:38.128690   12636 system_pods.go:89] "calico-kube-controllers-5c676f698c-bq78v" [c4e042ef-66b5-4089-91df-27b25d8fd24d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 10:21:38.128750   12636 system_pods.go:89] "calico-node-v25mq" [f524f291-f779-4efd-8ebb-973086872a70] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 10:21:38.128750   12636 system_pods.go:89] "coredns-66bc5c9577-lxr5q" [a495a611-6727-4a9f-9593-1037cf8ef095] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:21:38.128808   12636 system_pods.go:89] "etcd-calico-416400" [56e5dcd4-31ca-46f1-8b54-5e8c38c6746f] Running
	I1213 10:21:38.128808   12636 system_pods.go:89] "kube-apiserver-calico-416400" [2f08bf5c-ceca-4d42-a797-8077aa27d4f5] Running
	I1213 10:21:38.128846   12636 system_pods.go:89] "kube-controller-manager-calico-416400" [38ba20e5-95e2-4d29-9e60-a336c23a211f] Running
	I1213 10:21:38.128846   12636 system_pods.go:89] "kube-proxy-chspq" [6a344fee-c061-4fb8-9de2-201fc2381499] Running
	I1213 10:21:38.128880   12636 system_pods.go:89] "kube-scheduler-calico-416400" [a219bd5f-738d-4144-b4b3-98f17a066814] Running
	I1213 10:21:38.128880   12636 system_pods.go:89] "storage-provisioner" [95ac33a1-5b4c-43c9-9cdf-d186e55eb6b7] Running
	I1213 10:21:38.128922   12636 retry.go:31] will retry after 2.780548203s: missing components: kube-dns
	W1213 10:21:36.118362    8676 node_ready.go:57] node "custom-flannel-416400" has "Ready":"False" status (will retry)
	I1213 10:21:37.119153    8676 node_ready.go:49] node "custom-flannel-416400" is "Ready"
	I1213 10:21:37.119153    8676 node_ready.go:38] duration metric: took 15.007907s for node "custom-flannel-416400" to be "Ready" ...
	I1213 10:21:37.119153    8676 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:21:37.127364    8676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:21:37.150210    8676 api_server.go:72] duration metric: took 16.1654516s to wait for apiserver process to appear ...
	I1213 10:21:37.150250    8676 api_server.go:88] waiting for apiserver healthz status ...
	I1213 10:21:37.150283    8676 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:53686/healthz ...
	I1213 10:21:37.160127    8676 api_server.go:279] https://127.0.0.1:53686/healthz returned 200:
	ok
	I1213 10:21:37.162124    8676 api_server.go:141] control plane version: v1.34.2
	I1213 10:21:37.162124    8676 api_server.go:131] duration metric: took 11.8745ms to wait for apiserver health ...
	I1213 10:21:37.162124    8676 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 10:21:37.167119    8676 system_pods.go:59] 7 kube-system pods found
	I1213 10:21:37.167119    8676 system_pods.go:61] "coredns-66bc5c9577-5kpz8" [84eb0869-60c3-491f-bc58-ec6bd9252cbb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:21:37.167119    8676 system_pods.go:61] "etcd-custom-flannel-416400" [acadea55-159a-478c-858b-1a7d757919de] Running
	I1213 10:21:37.167119    8676 system_pods.go:61] "kube-apiserver-custom-flannel-416400" [74324d26-9e2b-4e4f-ad70-079287fc6545] Running
	I1213 10:21:37.167119    8676 system_pods.go:61] "kube-controller-manager-custom-flannel-416400" [3a70cee2-8b88-46df-86e8-961478cc9125] Running
	I1213 10:21:37.167119    8676 system_pods.go:61] "kube-proxy-55qzw" [bef351ea-9153-4ae2-879b-4ffad401e796] Running
	I1213 10:21:37.167119    8676 system_pods.go:61] "kube-scheduler-custom-flannel-416400" [2a327704-924d-4549-b444-3719dca2fbd6] Running
	I1213 10:21:37.167119    8676 system_pods.go:61] "storage-provisioner" [081dca84-4bfa-43b4-9031-2d4fc040f569] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:21:37.167119    8676 system_pods.go:74] duration metric: took 4.9951ms to wait for pod list to return data ...
	I1213 10:21:37.167119    8676 default_sa.go:34] waiting for default service account to be created ...
	I1213 10:21:37.172032    8676 default_sa.go:45] found service account: "default"
	I1213 10:21:37.172032    8676 default_sa.go:55] duration metric: took 4.9122ms for default service account to be created ...
	I1213 10:21:37.172032    8676 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 10:21:37.179410    8676 system_pods.go:86] 7 kube-system pods found
	I1213 10:21:37.179410    8676 system_pods.go:89] "coredns-66bc5c9577-5kpz8" [84eb0869-60c3-491f-bc58-ec6bd9252cbb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:21:37.179410    8676 system_pods.go:89] "etcd-custom-flannel-416400" [acadea55-159a-478c-858b-1a7d757919de] Running
	I1213 10:21:37.179410    8676 system_pods.go:89] "kube-apiserver-custom-flannel-416400" [74324d26-9e2b-4e4f-ad70-079287fc6545] Running
	I1213 10:21:37.179410    8676 system_pods.go:89] "kube-controller-manager-custom-flannel-416400" [3a70cee2-8b88-46df-86e8-961478cc9125] Running
	I1213 10:21:37.179410    8676 system_pods.go:89] "kube-proxy-55qzw" [bef351ea-9153-4ae2-879b-4ffad401e796] Running
	I1213 10:21:37.179410    8676 system_pods.go:89] "kube-scheduler-custom-flannel-416400" [2a327704-924d-4549-b444-3719dca2fbd6] Running
	I1213 10:21:37.179410    8676 system_pods.go:89] "storage-provisioner" [081dca84-4bfa-43b4-9031-2d4fc040f569] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:21:37.179410    8676 retry.go:31] will retry after 231.026387ms: missing components: kube-dns
	I1213 10:21:37.418843    8676 system_pods.go:86] 7 kube-system pods found
	I1213 10:21:37.418843    8676 system_pods.go:89] "coredns-66bc5c9577-5kpz8" [84eb0869-60c3-491f-bc58-ec6bd9252cbb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:21:37.419134    8676 system_pods.go:89] "etcd-custom-flannel-416400" [acadea55-159a-478c-858b-1a7d757919de] Running
	I1213 10:21:37.419178    8676 system_pods.go:89] "kube-apiserver-custom-flannel-416400" [74324d26-9e2b-4e4f-ad70-079287fc6545] Running
	I1213 10:21:37.419178    8676 system_pods.go:89] "kube-controller-manager-custom-flannel-416400" [3a70cee2-8b88-46df-86e8-961478cc9125] Running
	I1213 10:21:37.419178    8676 system_pods.go:89] "kube-proxy-55qzw" [bef351ea-9153-4ae2-879b-4ffad401e796] Running
	I1213 10:21:37.419178    8676 system_pods.go:89] "kube-scheduler-custom-flannel-416400" [2a327704-924d-4549-b444-3719dca2fbd6] Running
	I1213 10:21:37.419178    8676 system_pods.go:89] "storage-provisioner" [081dca84-4bfa-43b4-9031-2d4fc040f569] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:21:37.419178    8676 retry.go:31] will retry after 291.052084ms: missing components: kube-dns
	I1213 10:21:37.719751    8676 system_pods.go:86] 7 kube-system pods found
	I1213 10:21:37.719751    8676 system_pods.go:89] "coredns-66bc5c9577-5kpz8" [84eb0869-60c3-491f-bc58-ec6bd9252cbb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:21:37.719751    8676 system_pods.go:89] "etcd-custom-flannel-416400" [acadea55-159a-478c-858b-1a7d757919de] Running
	I1213 10:21:37.719751    8676 system_pods.go:89] "kube-apiserver-custom-flannel-416400" [74324d26-9e2b-4e4f-ad70-079287fc6545] Running
	I1213 10:21:37.719751    8676 system_pods.go:89] "kube-controller-manager-custom-flannel-416400" [3a70cee2-8b88-46df-86e8-961478cc9125] Running
	I1213 10:21:37.719751    8676 system_pods.go:89] "kube-proxy-55qzw" [bef351ea-9153-4ae2-879b-4ffad401e796] Running
	I1213 10:21:37.719751    8676 system_pods.go:89] "kube-scheduler-custom-flannel-416400" [2a327704-924d-4549-b444-3719dca2fbd6] Running
	I1213 10:21:37.719751    8676 system_pods.go:89] "storage-provisioner" [081dca84-4bfa-43b4-9031-2d4fc040f569] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:21:37.719751    8676 retry.go:31] will retry after 434.04287ms: missing components: kube-dns
	I1213 10:21:38.214651    8676 system_pods.go:86] 7 kube-system pods found
	I1213 10:21:38.214651    8676 system_pods.go:89] "coredns-66bc5c9577-5kpz8" [84eb0869-60c3-491f-bc58-ec6bd9252cbb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:21:38.214651    8676 system_pods.go:89] "etcd-custom-flannel-416400" [acadea55-159a-478c-858b-1a7d757919de] Running
	I1213 10:21:38.214651    8676 system_pods.go:89] "kube-apiserver-custom-flannel-416400" [74324d26-9e2b-4e4f-ad70-079287fc6545] Running
	I1213 10:21:38.214651    8676 system_pods.go:89] "kube-controller-manager-custom-flannel-416400" [3a70cee2-8b88-46df-86e8-961478cc9125] Running
	I1213 10:21:38.214651    8676 system_pods.go:89] "kube-proxy-55qzw" [bef351ea-9153-4ae2-879b-4ffad401e796] Running
	I1213 10:21:38.214651    8676 system_pods.go:89] "kube-scheduler-custom-flannel-416400" [2a327704-924d-4549-b444-3719dca2fbd6] Running
	I1213 10:21:38.214651    8676 system_pods.go:89] "storage-provisioner" [081dca84-4bfa-43b4-9031-2d4fc040f569] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:21:38.214651    8676 retry.go:31] will retry after 567.902352ms: missing components: kube-dns
	I1213 10:21:38.789787    8676 system_pods.go:86] 7 kube-system pods found
	I1213 10:21:38.789787    8676 system_pods.go:89] "coredns-66bc5c9577-5kpz8" [84eb0869-60c3-491f-bc58-ec6bd9252cbb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:21:38.789787    8676 system_pods.go:89] "etcd-custom-flannel-416400" [acadea55-159a-478c-858b-1a7d757919de] Running
	I1213 10:21:38.789787    8676 system_pods.go:89] "kube-apiserver-custom-flannel-416400" [74324d26-9e2b-4e4f-ad70-079287fc6545] Running
	I1213 10:21:38.789787    8676 system_pods.go:89] "kube-controller-manager-custom-flannel-416400" [3a70cee2-8b88-46df-86e8-961478cc9125] Running
	I1213 10:21:38.789787    8676 system_pods.go:89] "kube-proxy-55qzw" [bef351ea-9153-4ae2-879b-4ffad401e796] Running
	I1213 10:21:38.789787    8676 system_pods.go:89] "kube-scheduler-custom-flannel-416400" [2a327704-924d-4549-b444-3719dca2fbd6] Running
	I1213 10:21:38.789787    8676 system_pods.go:89] "storage-provisioner" [081dca84-4bfa-43b4-9031-2d4fc040f569] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:21:38.789787    8676 retry.go:31] will retry after 542.338735ms: missing components: kube-dns
	I1213 10:21:39.524519    8676 system_pods.go:86] 7 kube-system pods found
	I1213 10:21:39.524519    8676 system_pods.go:89] "coredns-66bc5c9577-5kpz8" [84eb0869-60c3-491f-bc58-ec6bd9252cbb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:21:39.524519    8676 system_pods.go:89] "etcd-custom-flannel-416400" [acadea55-159a-478c-858b-1a7d757919de] Running
	I1213 10:21:39.524519    8676 system_pods.go:89] "kube-apiserver-custom-flannel-416400" [74324d26-9e2b-4e4f-ad70-079287fc6545] Running
	I1213 10:21:39.524519    8676 system_pods.go:89] "kube-controller-manager-custom-flannel-416400" [3a70cee2-8b88-46df-86e8-961478cc9125] Running
	I1213 10:21:39.524519    8676 system_pods.go:89] "kube-proxy-55qzw" [bef351ea-9153-4ae2-879b-4ffad401e796] Running
	I1213 10:21:39.525519    8676 system_pods.go:89] "kube-scheduler-custom-flannel-416400" [2a327704-924d-4549-b444-3719dca2fbd6] Running
	I1213 10:21:39.525519    8676 system_pods.go:89] "storage-provisioner" [081dca84-4bfa-43b4-9031-2d4fc040f569] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:21:39.525519    8676 retry.go:31] will retry after 808.403967ms: missing components: kube-dns
	I1213 10:21:40.341750    8676 system_pods.go:86] 7 kube-system pods found
	I1213 10:21:40.341750    8676 system_pods.go:89] "coredns-66bc5c9577-5kpz8" [84eb0869-60c3-491f-bc58-ec6bd9252cbb] Running
	I1213 10:21:40.341848    8676 system_pods.go:89] "etcd-custom-flannel-416400" [acadea55-159a-478c-858b-1a7d757919de] Running
	I1213 10:21:40.341867    8676 system_pods.go:89] "kube-apiserver-custom-flannel-416400" [74324d26-9e2b-4e4f-ad70-079287fc6545] Running
	I1213 10:21:40.341893    8676 system_pods.go:89] "kube-controller-manager-custom-flannel-416400" [3a70cee2-8b88-46df-86e8-961478cc9125] Running
	I1213 10:21:40.341893    8676 system_pods.go:89] "kube-proxy-55qzw" [bef351ea-9153-4ae2-879b-4ffad401e796] Running
	I1213 10:21:40.341893    8676 system_pods.go:89] "kube-scheduler-custom-flannel-416400" [2a327704-924d-4549-b444-3719dca2fbd6] Running
	I1213 10:21:40.341893    8676 system_pods.go:89] "storage-provisioner" [081dca84-4bfa-43b4-9031-2d4fc040f569] Running
	I1213 10:21:40.341893    8676 system_pods.go:126] duration metric: took 3.1698158s to wait for k8s-apps to be running ...
	I1213 10:21:40.341893    8676 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 10:21:40.346737    8676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:21:40.368528    8676 system_svc.go:56] duration metric: took 26.6342ms WaitForService to wait for kubelet
	I1213 10:21:40.368528    8676 kubeadm.go:587] duration metric: took 19.383723s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:21:40.368528    8676 node_conditions.go:102] verifying NodePressure condition ...
	I1213 10:21:40.376936    8676 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1213 10:21:40.376994    8676 node_conditions.go:123] node cpu capacity is 16
	I1213 10:21:40.377049    8676 node_conditions.go:105] duration metric: took 8.5216ms to run NodePressure ...
	I1213 10:21:40.377049    8676 start.go:242] waiting for startup goroutines ...
	I1213 10:21:40.377049    8676 start.go:247] waiting for cluster config update ...
	I1213 10:21:40.377049    8676 start.go:256] writing updated cluster config ...
	I1213 10:21:40.382218    8676 ssh_runner.go:195] Run: rm -f paused
	I1213 10:21:40.390187    8676 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 10:21:40.398704    8676 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5kpz8" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:21:40.409147    8676 pod_ready.go:94] pod "coredns-66bc5c9577-5kpz8" is "Ready"
	I1213 10:21:40.409147    8676 pod_ready.go:86] duration metric: took 10.4436ms for pod "coredns-66bc5c9577-5kpz8" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:21:40.414148    8676 pod_ready.go:83] waiting for pod "etcd-custom-flannel-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:21:40.424167    8676 pod_ready.go:94] pod "etcd-custom-flannel-416400" is "Ready"
	I1213 10:21:40.424167    8676 pod_ready.go:86] duration metric: took 10.0181ms for pod "etcd-custom-flannel-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:21:40.429153    8676 pod_ready.go:83] waiting for pod "kube-apiserver-custom-flannel-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:21:40.438141    8676 pod_ready.go:94] pod "kube-apiserver-custom-flannel-416400" is "Ready"
	I1213 10:21:40.438141    8676 pod_ready.go:86] duration metric: took 8.9883ms for pod "kube-apiserver-custom-flannel-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:21:40.442141    8676 pod_ready.go:83] waiting for pod "kube-controller-manager-custom-flannel-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:21:40.799742    8676 pod_ready.go:94] pod "kube-controller-manager-custom-flannel-416400" is "Ready"
	I1213 10:21:40.799742    8676 pod_ready.go:86] duration metric: took 357.596ms for pod "kube-controller-manager-custom-flannel-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:21:40.999275    8676 pod_ready.go:83] waiting for pod "kube-proxy-55qzw" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:21:41.399546    8676 pod_ready.go:94] pod "kube-proxy-55qzw" is "Ready"
	I1213 10:21:41.399632    8676 pod_ready.go:86] duration metric: took 400.2297ms for pod "kube-proxy-55qzw" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:21:41.599393    8676 pod_ready.go:83] waiting for pod "kube-scheduler-custom-flannel-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:21:42.010586    8676 pod_ready.go:94] pod "kube-scheduler-custom-flannel-416400" is "Ready"
	I1213 10:21:42.010586    8676 pod_ready.go:86] duration metric: took 411.1875ms for pod "kube-scheduler-custom-flannel-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:21:42.010657    8676 pod_ready.go:40] duration metric: took 1.6204472s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 10:21:42.129519    8676 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 10:21:42.132024    8676 out.go:179] * Done! kubectl is now configured to use "custom-flannel-416400" cluster and "default" namespace by default
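The api_server.go lines in this block show the wait sequence minikube runs after a start: node Ready, then an apiserver process check, then polling https://127.0.0.1:53686/healthz until it returns 200 "ok", then pod and service-account checks. Below is a self-contained sketch of such a healthz probe, assuming the forwarded port from this run and skipping TLS verification because the cluster certificate is self-signed (illustrative, not minikube's code).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// minikube's apiserver certificate is self-signed, so verification
		// is skipped for this localhost probe
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for attempt := 0; attempt < 30; attempt++ {
		resp, err := client.Get("https://127.0.0.1:53686/healthz") // port taken from this run's log
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for /healthz")
}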
	I1213 10:21:40.917762   12636 system_pods.go:86] 9 kube-system pods found
	I1213 10:21:40.917762   12636 system_pods.go:89] "calico-kube-controllers-5c676f698c-bq78v" [c4e042ef-66b5-4089-91df-27b25d8fd24d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 10:21:40.917762   12636 system_pods.go:89] "calico-node-v25mq" [f524f291-f779-4efd-8ebb-973086872a70] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 10:21:40.917762   12636 system_pods.go:89] "coredns-66bc5c9577-lxr5q" [a495a611-6727-4a9f-9593-1037cf8ef095] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:21:40.917762   12636 system_pods.go:89] "etcd-calico-416400" [56e5dcd4-31ca-46f1-8b54-5e8c38c6746f] Running
	I1213 10:21:40.917762   12636 system_pods.go:89] "kube-apiserver-calico-416400" [2f08bf5c-ceca-4d42-a797-8077aa27d4f5] Running
	I1213 10:21:40.917762   12636 system_pods.go:89] "kube-controller-manager-calico-416400" [38ba20e5-95e2-4d29-9e60-a336c23a211f] Running
	I1213 10:21:40.917762   12636 system_pods.go:89] "kube-proxy-chspq" [6a344fee-c061-4fb8-9de2-201fc2381499] Running
	I1213 10:21:40.917762   12636 system_pods.go:89] "kube-scheduler-calico-416400" [a219bd5f-738d-4144-b4b3-98f17a066814] Running
	I1213 10:21:40.917762   12636 system_pods.go:89] "storage-provisioner" [95ac33a1-5b4c-43c9-9cdf-d186e55eb6b7] Running
	I1213 10:21:40.917762   12636 retry.go:31] will retry after 2.36117669s: missing components: kube-dns
	I1213 10:21:43.299827   12636 system_pods.go:86] 9 kube-system pods found
	I1213 10:21:43.299827   12636 system_pods.go:89] "calico-kube-controllers-5c676f698c-bq78v" [c4e042ef-66b5-4089-91df-27b25d8fd24d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 10:21:43.299827   12636 system_pods.go:89] "calico-node-v25mq" [f524f291-f779-4efd-8ebb-973086872a70] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 10:21:43.299827   12636 system_pods.go:89] "coredns-66bc5c9577-lxr5q" [a495a611-6727-4a9f-9593-1037cf8ef095] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:21:43.300842   12636 system_pods.go:89] "etcd-calico-416400" [56e5dcd4-31ca-46f1-8b54-5e8c38c6746f] Running
	I1213 10:21:43.300842   12636 system_pods.go:89] "kube-apiserver-calico-416400" [2f08bf5c-ceca-4d42-a797-8077aa27d4f5] Running
	I1213 10:21:43.300842   12636 system_pods.go:89] "kube-controller-manager-calico-416400" [38ba20e5-95e2-4d29-9e60-a336c23a211f] Running
	I1213 10:21:43.300842   12636 system_pods.go:89] "kube-proxy-chspq" [6a344fee-c061-4fb8-9de2-201fc2381499] Running
	I1213 10:21:43.300842   12636 system_pods.go:89] "kube-scheduler-calico-416400" [a219bd5f-738d-4144-b4b3-98f17a066814] Running
	I1213 10:21:43.300842   12636 system_pods.go:89] "storage-provisioner" [95ac33a1-5b4c-43c9-9cdf-d186e55eb6b7] Running
	I1213 10:21:43.300842   12636 retry.go:31] will retry after 2.856874825s: missing components: kube-dns
	W1213 10:21:40.530803    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	I1213 10:21:46.200658   12636 system_pods.go:86] 9 kube-system pods found
	I1213 10:21:46.200658   12636 system_pods.go:89] "calico-kube-controllers-5c676f698c-bq78v" [c4e042ef-66b5-4089-91df-27b25d8fd24d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 10:21:46.200658   12636 system_pods.go:89] "calico-node-v25mq" [f524f291-f779-4efd-8ebb-973086872a70] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 10:21:46.200658   12636 system_pods.go:89] "coredns-66bc5c9577-lxr5q" [a495a611-6727-4a9f-9593-1037cf8ef095] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:21:46.200658   12636 system_pods.go:89] "etcd-calico-416400" [56e5dcd4-31ca-46f1-8b54-5e8c38c6746f] Running
	I1213 10:21:46.200658   12636 system_pods.go:89] "kube-apiserver-calico-416400" [2f08bf5c-ceca-4d42-a797-8077aa27d4f5] Running
	I1213 10:21:46.200658   12636 system_pods.go:89] "kube-controller-manager-calico-416400" [38ba20e5-95e2-4d29-9e60-a336c23a211f] Running
	I1213 10:21:46.200658   12636 system_pods.go:89] "kube-proxy-chspq" [6a344fee-c061-4fb8-9de2-201fc2381499] Running
	I1213 10:21:46.200658   12636 system_pods.go:89] "kube-scheduler-calico-416400" [a219bd5f-738d-4144-b4b3-98f17a066814] Running
	I1213 10:21:46.200658   12636 system_pods.go:89] "storage-provisioner" [95ac33a1-5b4c-43c9-9cdf-d186e55eb6b7] Running
	I1213 10:21:46.200658   12636 retry.go:31] will retry after 5.315717692s: missing components: kube-dns
	I1213 10:21:45.449630    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:21:45.554635    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:21:45.554635    8468 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 10:21:51.524807   12636 system_pods.go:86] 9 kube-system pods found
	I1213 10:21:51.524898   12636 system_pods.go:89] "calico-kube-controllers-5c676f698c-bq78v" [c4e042ef-66b5-4089-91df-27b25d8fd24d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 10:21:51.524898   12636 system_pods.go:89] "calico-node-v25mq" [f524f291-f779-4efd-8ebb-973086872a70] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 10:21:51.524898   12636 system_pods.go:89] "coredns-66bc5c9577-lxr5q" [a495a611-6727-4a9f-9593-1037cf8ef095] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:21:51.524898   12636 system_pods.go:89] "etcd-calico-416400" [56e5dcd4-31ca-46f1-8b54-5e8c38c6746f] Running
	I1213 10:21:51.524898   12636 system_pods.go:89] "kube-apiserver-calico-416400" [2f08bf5c-ceca-4d42-a797-8077aa27d4f5] Running
	I1213 10:21:51.524898   12636 system_pods.go:89] "kube-controller-manager-calico-416400" [38ba20e5-95e2-4d29-9e60-a336c23a211f] Running
	I1213 10:21:51.524995   12636 system_pods.go:89] "kube-proxy-chspq" [6a344fee-c061-4fb8-9de2-201fc2381499] Running
	I1213 10:21:51.524995   12636 system_pods.go:89] "kube-scheduler-calico-416400" [a219bd5f-738d-4144-b4b3-98f17a066814] Running
	I1213 10:21:51.524995   12636 system_pods.go:89] "storage-provisioner" [95ac33a1-5b4c-43c9-9cdf-d186e55eb6b7] Running
	I1213 10:21:51.525060   12636 retry.go:31] will retry after 4.835789341s: missing components: kube-dns
	W1213 10:21:50.566639    8468 node_ready.go:55] error getting node "no-preload-803600" condition "Ready" status (will retry): Get "https://127.0.0.1:53494/api/v1/nodes/no-preload-803600": EOF
	I1213 10:21:56.526618    8468 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:21:56.624189    8468 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:21:56.624728    8468 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 10:21:56.630182    8468 out.go:179] * Enabled addons: 
	I1213 10:21:56.367981   12636 system_pods.go:86] 9 kube-system pods found
	I1213 10:21:56.367981   12636 system_pods.go:89] "calico-kube-controllers-5c676f698c-bq78v" [c4e042ef-66b5-4089-91df-27b25d8fd24d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 10:21:56.367981   12636 system_pods.go:89] "calico-node-v25mq" [f524f291-f779-4efd-8ebb-973086872a70] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 10:21:56.367981   12636 system_pods.go:89] "coredns-66bc5c9577-lxr5q" [a495a611-6727-4a9f-9593-1037cf8ef095] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:21:56.367981   12636 system_pods.go:89] "etcd-calico-416400" [56e5dcd4-31ca-46f1-8b54-5e8c38c6746f] Running
	I1213 10:21:56.367981   12636 system_pods.go:89] "kube-apiserver-calico-416400" [2f08bf5c-ceca-4d42-a797-8077aa27d4f5] Running
	I1213 10:21:56.367981   12636 system_pods.go:89] "kube-controller-manager-calico-416400" [38ba20e5-95e2-4d29-9e60-a336c23a211f] Running
	I1213 10:21:56.367981   12636 system_pods.go:89] "kube-proxy-chspq" [6a344fee-c061-4fb8-9de2-201fc2381499] Running
	I1213 10:21:56.367981   12636 system_pods.go:89] "kube-scheduler-calico-416400" [a219bd5f-738d-4144-b4b3-98f17a066814] Running
	I1213 10:21:56.367981   12636 system_pods.go:89] "storage-provisioner" [95ac33a1-5b4c-43c9-9cdf-d186e55eb6b7] Running
	I1213 10:21:56.367981   12636 retry.go:31] will retry after 8.597320777s: missing components: kube-dns
	I1213 10:21:56.634860    8468 addons.go:530] duration metric: took 1m56.9218745s for enable addons: enabled=[]
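Both addon applies above fail for the same reason: kubectl validation first downloads the OpenAPI schema from the apiserver, and nothing is listening on localhost:8443 inside the node, so every manifest fails with "connection refused". The --validate=false hint in the error would only skip schema validation; the apply itself still needs a reachable server. A quick probe that reproduces this failure mode (illustrative; the port is the in-node apiserver port from the log):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// matches the errors above: dial tcp [::1]:8443: connect: connection refused
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}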
	
	
	==> Docker <==
	Dec 13 10:11:48 newest-cni-307000 dockerd[1196]: time="2025-12-13T10:11:48.116462064Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 13 10:11:48 newest-cni-307000 dockerd[1196]: time="2025-12-13T10:11:48.116551372Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 13 10:11:48 newest-cni-307000 dockerd[1196]: time="2025-12-13T10:11:48.116562473Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 13 10:11:48 newest-cni-307000 dockerd[1196]: time="2025-12-13T10:11:48.116569874Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 10:11:48 newest-cni-307000 dockerd[1196]: time="2025-12-13T10:11:48.116575674Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 13 10:11:48 newest-cni-307000 dockerd[1196]: time="2025-12-13T10:11:48.116598777Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 13 10:11:48 newest-cni-307000 dockerd[1196]: time="2025-12-13T10:11:48.116638580Z" level=info msg="Initializing buildkit"
	Dec 13 10:11:48 newest-cni-307000 dockerd[1196]: time="2025-12-13T10:11:48.245496763Z" level=info msg="Completed buildkit initialization"
	Dec 13 10:11:48 newest-cni-307000 dockerd[1196]: time="2025-12-13T10:11:48.260353344Z" level=info msg="Daemon has completed initialization"
	Dec 13 10:11:48 newest-cni-307000 dockerd[1196]: time="2025-12-13T10:11:48.260701677Z" level=info msg="API listen on [::]:2376"
	Dec 13 10:11:48 newest-cni-307000 dockerd[1196]: time="2025-12-13T10:11:48.260766383Z" level=info msg="API listen on /run/docker.sock"
	Dec 13 10:11:48 newest-cni-307000 dockerd[1196]: time="2025-12-13T10:11:48.260786285Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 10:11:48 newest-cni-307000 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 13 10:11:49 newest-cni-307000 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 10:11:49 newest-cni-307000 cri-dockerd[1489]: time="2025-12-13T10:11:49Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 13 10:11:49 newest-cni-307000 cri-dockerd[1489]: time="2025-12-13T10:11:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 13 10:11:49 newest-cni-307000 cri-dockerd[1489]: time="2025-12-13T10:11:49Z" level=info msg="Start docker client with request timeout 0s"
	Dec 13 10:11:49 newest-cni-307000 cri-dockerd[1489]: time="2025-12-13T10:11:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 13 10:11:49 newest-cni-307000 cri-dockerd[1489]: time="2025-12-13T10:11:49Z" level=info msg="Loaded network plugin cni"
	Dec 13 10:11:49 newest-cni-307000 cri-dockerd[1489]: time="2025-12-13T10:11:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 13 10:11:49 newest-cni-307000 cri-dockerd[1489]: time="2025-12-13T10:11:49Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 13 10:11:49 newest-cni-307000 cri-dockerd[1489]: time="2025-12-13T10:11:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 13 10:11:49 newest-cni-307000 cri-dockerd[1489]: time="2025-12-13T10:11:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 13 10:11:49 newest-cni-307000 cri-dockerd[1489]: time="2025-12-13T10:11:49Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 13 10:11:49 newest-cni-307000 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:22:02.278862   12908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:22:02.280163   12908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:22:02.281514   12908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:22:02.283009   12908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:22:02.284908   12908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +6.503579] CPU: 8 PID: 426388 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f2706b18b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f2706b18af6.
	[  +0.000000] RSP: 002b:00007ffc13436bf0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.831312] CPU: 4 PID: 426544 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7fe875bcbb20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7fe875bcbaf6.
	[  +0.000001] RSP: 002b:00007ffc572b7090 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +2.213464] tmpfs: Unknown parameter 'noswap'
	[Dec13 10:21] tmpfs: Unknown parameter 'noswap'
	[  +0.368629] tmpfs: Unknown parameter 'noswap'
	[  +9.494409] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 10:22:02 up  1:58,  0 user,  load average: 5.85, 4.04, 3.55
	Linux newest-cni-307000 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:21:59 newest-cni-307000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:21:59 newest-cni-307000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 13 10:21:59 newest-cni-307000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:21:59 newest-cni-307000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:21:59 newest-cni-307000 kubelet[12731]: E1213 10:21:59.945116   12731 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:21:59 newest-cni-307000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:21:59 newest-cni-307000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:22:00 newest-cni-307000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 13 10:22:00 newest-cni-307000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:22:00 newest-cni-307000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:22:00 newest-cni-307000 kubelet[12757]: E1213 10:22:00.702759   12757 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:22:00 newest-cni-307000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:22:00 newest-cni-307000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:22:01 newest-cni-307000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 486.
	Dec 13 10:22:01 newest-cni-307000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:22:01 newest-cni-307000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:22:01 newest-cni-307000 kubelet[12785]: E1213 10:22:01.469447   12785 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:22:01 newest-cni-307000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:22:01 newest-cni-307000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:22:02 newest-cni-307000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 487.
	Dec 13 10:22:02 newest-cni-307000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:22:02 newest-cni-307000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:22:02 newest-cni-307000 kubelet[12876]: E1213 10:22:02.207921   12876 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:22:02 newest-cni-307000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:22:02 newest-cni-307000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-307000 -n newest-cni-307000
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-307000 -n newest-cni-307000: exit status 6 (593.5915ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1213 10:22:03.514895    7056 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-307000" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-307000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (123.87s)
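The kubelet section of the log above pins down the root cause for this group of failures: this run's kubelet (v1.35.0-beta.0) validates the host's cgroup mode at startup and exits with "kubelet is configured to not run on a host using cgroup v1", systemd restart-loops it (restart counter 484 through 487), the static apiserver pod therefore never starts, and the addon apply, describe nodes, and status checks all fail downstream with connection refused. The Docker daemon's own warning earlier in the log ("Support for cgroup v1 is deprecated ...") confirms this WSL2 host is on cgroup v1. A hypothetical Go check for the host's cgroup mode, in the spirit of that kubelet validation (the magic number is CGROUP2_SUPER_MAGIC from linux/magic.h; this is illustrative, not the kubelet's code):

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

const cgroup2SuperMagic = 0x63677270 // CGROUP2_SUPER_MAGIC from linux/magic.h

func main() {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		fmt.Println("statfs:", err)
		return
	}
	if st.Type == cgroup2SuperMagic {
		fmt.Println("unified hierarchy (cgroup v2)")
	} else {
		fmt.Println("legacy hierarchy (cgroup v1) - this run's kubelet refuses to start here")
	}
}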

TestStartStop/group/newest-cni/serial/SecondStart (380.49s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-307000 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p newest-cni-307000 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 105 (6m15.542234s)

-- stdout --
	* [newest-cni-307000] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting "newest-cni-307000" primary control-plane node in "newest-cni-307000" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1213 10:22:06.105485    5404 out.go:360] Setting OutFile to fd 1408 ...
	I1213 10:22:06.151478    5404 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:22:06.151478    5404 out.go:374] Setting ErrFile to fd 516...
	I1213 10:22:06.151478    5404 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:22:06.167207    5404 out.go:368] Setting JSON to false
	I1213 10:22:06.170130    5404 start.go:133] hostinfo: {"hostname":"minikube4","uptime":7133,"bootTime":1765614192,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 10:22:06.170130    5404 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 10:22:06.174454    5404 out.go:179] * [newest-cni-307000] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 10:22:06.177854    5404 notify.go:221] Checking for updates...
	I1213 10:22:06.179722    5404 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:22:06.182257    5404 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:22:06.185745    5404 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 10:22:06.201400    5404 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:22:06.204400    5404 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:22:06.206392    5404 config.go:182] Loaded profile config "newest-cni-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:22:06.207391    5404 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:22:06.318389    5404 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 10:22:06.323391    5404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:22:06.566310    5404 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:96 SystemTime:2025-12-13 10:22:06.545331674 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:22:06.570316    5404 out.go:179] * Using the docker driver based on existing profile
	I1213 10:22:06.576312    5404 start.go:309] selected driver: docker
	I1213 10:22:06.576312    5404 start.go:927] validating driver "docker" against &{Name:newest-cni-307000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-307000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:22:06.576312    5404 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:22:06.684323    5404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:22:06.944856    5404 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:22:06.91866674 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:22:06.944856    5404 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 10:22:06.944856    5404 cni.go:84] Creating CNI manager for ""
	I1213 10:22:06.944856    5404 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 10:22:06.944856    5404 start.go:353] cluster config:
	{Name:newest-cni-307000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-307000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:22:06.948868    5404 out.go:179] * Starting "newest-cni-307000" primary control-plane node in "newest-cni-307000" cluster
	I1213 10:22:06.952872    5404 cache.go:134] Beginning downloading kic base image for docker with docker
	I1213 10:22:06.954849    5404 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:22:06.957866    5404 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:22:06.957866    5404 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 10:22:06.958847    5404 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1213 10:22:06.958847    5404 cache.go:65] Caching tarball of preloaded images
	I1213 10:22:06.958847    5404 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 10:22:06.958847    5404 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1213 10:22:06.958847    5404 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\config.json ...
	I1213 10:22:07.042864    5404 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:22:07.042864    5404 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:22:07.042864    5404 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:22:07.042864    5404 start.go:360] acquireMachinesLock for newest-cni-307000: {Name:mkec1c80bf050de750404c276f94aaabab293332 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:22:07.042864    5404 start.go:364] duration metric: took 0s to acquireMachinesLock for "newest-cni-307000"
	I1213 10:22:07.042864    5404 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:22:07.042864    5404 fix.go:54] fixHost starting: 
	I1213 10:22:07.049849    5404 cli_runner.go:164] Run: docker container inspect newest-cni-307000 --format={{.State.Status}}
	I1213 10:22:07.110855    5404 fix.go:112] recreateIfNeeded on newest-cni-307000: state=Stopped err=<nil>
	W1213 10:22:07.110855    5404 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:22:07.113850    5404 out.go:252] * Restarting existing docker container for "newest-cni-307000" ...
	I1213 10:22:07.117865    5404 cli_runner.go:164] Run: docker start newest-cni-307000
	I1213 10:22:07.711562    5404 cli_runner.go:164] Run: docker container inspect newest-cni-307000 --format={{.State.Status}}
	I1213 10:22:07.767545    5404 kic.go:430] container "newest-cni-307000" state is running.
	I1213 10:22:07.772545    5404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-307000
	I1213 10:22:07.825552    5404 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\config.json ...
	I1213 10:22:07.827552    5404 machine.go:94] provisionDockerMachine start ...
	I1213 10:22:07.831554    5404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307000
	I1213 10:22:07.890548    5404 main.go:143] libmachine: Using SSH client type: native
	I1213 10:22:07.891569    5404 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53942 <nil> <nil>}
	I1213 10:22:07.891569    5404 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:22:07.894563    5404 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 10:22:11.078393    5404 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-307000
	
	I1213 10:22:11.078393    5404 ubuntu.go:182] provisioning hostname "newest-cni-307000"
	I1213 10:22:11.084385    5404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307000
	I1213 10:22:11.158383    5404 main.go:143] libmachine: Using SSH client type: native
	I1213 10:22:11.159388    5404 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53942 <nil> <nil>}
	I1213 10:22:11.159388    5404 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-307000 && echo "newest-cni-307000" | sudo tee /etc/hostname
	I1213 10:22:11.363020    5404 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-307000
	
	I1213 10:22:11.367017    5404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307000
	I1213 10:22:11.427019    5404 main.go:143] libmachine: Using SSH client type: native
	I1213 10:22:11.427019    5404 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53942 <nil> <nil>}
	I1213 10:22:11.427019    5404 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-307000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-307000/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-307000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:22:11.600030    5404 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:22:11.600030    5404 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1213 10:22:11.600030    5404 ubuntu.go:190] setting up certificates
	I1213 10:22:11.600030    5404 provision.go:84] configureAuth start
	I1213 10:22:11.605037    5404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-307000
	I1213 10:22:11.662035    5404 provision.go:143] copyHostCerts
	I1213 10:22:11.662035    5404 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1213 10:22:11.662035    5404 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1213 10:22:11.662035    5404 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1213 10:22:11.663023    5404 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1213 10:22:11.663023    5404 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1213 10:22:11.663023    5404 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1213 10:22:11.664023    5404 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1213 10:22:11.664023    5404 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1213 10:22:11.664023    5404 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1213 10:22:11.665032    5404 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-307000 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-307000]
	I1213 10:22:11.898386    5404 provision.go:177] copyRemoteCerts
	I1213 10:22:11.907388    5404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:22:11.912373    5404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307000
	I1213 10:22:11.971377    5404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53942 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-307000\id_rsa Username:docker}
	I1213 10:22:12.093387    5404 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:22:12.125379    5404 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:22:12.154386    5404 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:22:12.182385    5404 provision.go:87] duration metric: took 582.3467ms to configureAuth
	I1213 10:22:12.183373    5404 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:22:12.183373    5404 config.go:182] Loaded profile config "newest-cni-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:22:12.186383    5404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307000
	I1213 10:22:12.244374    5404 main.go:143] libmachine: Using SSH client type: native
	I1213 10:22:12.244374    5404 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53942 <nil> <nil>}
	I1213 10:22:12.244374    5404 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 10:22:12.413384    5404 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1213 10:22:12.413384    5404 ubuntu.go:71] root file system type: overlay
	I1213 10:22:12.413384    5404 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 10:22:12.418473    5404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307000
	I1213 10:22:12.473470    5404 main.go:143] libmachine: Using SSH client type: native
	I1213 10:22:12.474473    5404 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53942 <nil> <nil>}
	I1213 10:22:12.474473    5404 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 10:22:12.654488    5404 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 10:22:12.658484    5404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307000
	I1213 10:22:12.716479    5404 main.go:143] libmachine: Using SSH client type: native
	I1213 10:22:12.716479    5404 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 53942 <nil> <nil>}
	I1213 10:22:12.716479    5404 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1213 10:22:12.897486    5404 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:22:12.897486    5404 machine.go:97] duration metric: took 5.0698616s to provisionDockerMachine
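	
The unit written above is applied with a diff-or-replace guard (the `sudo diff -u ... || { mv; daemon-reload; restart; }` command a few lines up), so an unchanged file never triggers a restart. A hedged way to confirm which ExecStart ended up in effect on the guest:

	# Show the docker unit systemd actually loaded, drop-ins included
	minikube ssh -p newest-cni-307000 "sudo systemctl cat docker.service"
	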
	I1213 10:22:12.897486    5404 start.go:293] postStartSetup for "newest-cni-307000" (driver="docker")
	I1213 10:22:12.897486    5404 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:22:12.905486    5404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:22:12.910484    5404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307000
	I1213 10:22:12.969485    5404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53942 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-307000\id_rsa Username:docker}
	I1213 10:22:13.112493    5404 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:22:13.120487    5404 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:22:13.120487    5404 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:22:13.120487    5404 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1213 10:22:13.120487    5404 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1213 10:22:13.121487    5404 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> 29682.pem in /etc/ssl/certs
	I1213 10:22:13.125490    5404 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 10:22:13.141491    5404 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /etc/ssl/certs/29682.pem (1708 bytes)
	I1213 10:22:13.175486    5404 start.go:296] duration metric: took 277.996ms for postStartSetup
	I1213 10:22:13.179481    5404 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:22:13.183489    5404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307000
	I1213 10:22:13.256489    5404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53942 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-307000\id_rsa Username:docker}
	I1213 10:22:13.382482    5404 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:22:13.390505    5404 fix.go:56] duration metric: took 6.3475494s for fixHost
	I1213 10:22:13.390505    5404 start.go:83] releasing machines lock for "newest-cni-307000", held for 6.3475494s
	I1213 10:22:13.394504    5404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-307000
	I1213 10:22:13.451490    5404 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1213 10:22:13.455495    5404 ssh_runner.go:195] Run: cat /version.json
	I1213 10:22:13.456500    5404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307000
	I1213 10:22:13.459494    5404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307000
	I1213 10:22:13.510492    5404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53942 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-307000\id_rsa Username:docker}
	I1213 10:22:13.513497    5404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53942 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-307000\id_rsa Username:docker}
	W1213 10:22:13.636493    5404 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
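	
The exit-127 above is the probe passing the Windows binary name `curl.exe` straight into the Linux guest. A sketch of rerunning the same reachability check by hand, assuming `curl` is present in the kicbase image:

	# Same two-second probe, using the Linux binary name
	minikube ssh -p newest-cni-307000 "curl -sS -m 2 https://registry.k8s.io/"
	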
	I1213 10:22:13.653496    5404 ssh_runner.go:195] Run: systemctl --version
	I1213 10:22:13.669487    5404 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:22:13.678510    5404 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:22:13.683496    5404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:22:13.699493    5404 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:22:13.699493    5404 start.go:496] detecting cgroup driver to use...
	I1213 10:22:13.699493    5404 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:22:13.700494    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:22:13.730529    5404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1213 10:22:13.739497    5404 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1213 10:22:13.739497    5404 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
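	
Per the linked docs, minikube forwards proxy settings from the environment of the `minikube start` invocation. A hedged sketch in bash (on this Windows host the PowerShell equivalent would use `$env:`), reusing the Docker Desktop proxy visible in the docker info output above; NO_PROXY must include the node IP:

	# Assumed values, taken from this run's docker info and node config
	export HTTPS_PROXY=http://http.docker.internal:3128
	export NO_PROXY=localhost,127.0.0.1,192.168.76.2
	minikube start -p newest-cni-307000
	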
	I1213 10:22:13.751497    5404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:22:13.768503    5404 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:22:13.773488    5404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:22:13.793495    5404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:22:13.821495    5404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:22:13.847491    5404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:22:13.871499    5404 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:22:13.894500    5404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:22:13.919517    5404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:22:13.944505    5404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 10:22:13.965509    5404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:22:13.982491    5404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:22:13.999493    5404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:22:14.167217    5404 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 10:22:14.310331    5404 start.go:496] detecting cgroup driver to use...
	I1213 10:22:14.310331    5404 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:22:14.316340    5404 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 10:22:14.343329    5404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:22:14.365336    5404 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:22:15.280133    5404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:22:15.317093    5404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:22:15.349166    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:22:15.376159    5404 ssh_runner.go:195] Run: which cri-dockerd
	I1213 10:22:15.387163    5404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 10:22:15.403196    5404 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1213 10:22:15.431193    5404 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 10:22:15.574455    5404 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 10:22:15.733084    5404 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 10:22:15.733084    5404 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 10:22:15.757063    5404 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1213 10:22:15.780077    5404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:22:15.938650    5404 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 10:22:16.866885    5404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:22:16.888883    5404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 10:22:16.917887    5404 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1213 10:22:16.953774    5404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 10:22:16.975774    5404 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 10:22:17.128013    5404 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 10:22:17.298027    5404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:22:17.480215    5404 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 10:22:17.516197    5404 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1213 10:22:17.547202    5404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:22:17.735206    5404 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 10:22:17.901201    5404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 10:22:17.935214    5404 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 10:22:17.941212    5404 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 10:22:17.952616    5404 start.go:564] Will wait 60s for crictl version
	I1213 10:22:17.957608    5404 ssh_runner.go:195] Run: which crictl
	I1213 10:22:17.969608    5404 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:22:18.026625    5404 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1213 10:22:18.031616    5404 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 10:22:18.097616    5404 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 10:22:18.156612    5404 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1213 10:22:18.161612    5404 cli_runner.go:164] Run: docker exec -t newest-cni-307000 dig +short host.docker.internal
	I1213 10:22:18.338619    5404 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1213 10:22:18.344608    5404 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1213 10:22:18.352610    5404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:22:18.385627    5404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-307000
	I1213 10:22:18.455618    5404 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 10:22:18.457621    5404 kubeadm.go:884] updating cluster {Name:newest-cni-307000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-307000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:22:18.457621    5404 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 10:22:18.462620    5404 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 10:22:18.507621    5404 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 10:22:18.507621    5404 docker.go:621] Images already preloaded, skipping extraction
	I1213 10:22:18.512627    5404 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 10:22:18.556614    5404 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 10:22:18.556614    5404 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:22:18.556614    5404 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 docker true true} ...
	I1213 10:22:18.556614    5404 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-307000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-307000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 10:22:18.561623    5404 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1213 10:22:18.678617    5404 cni.go:84] Creating CNI manager for ""
	I1213 10:22:18.678617    5404 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 10:22:18.678617    5404 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 10:22:18.678617    5404 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-307000 NodeName:newest-cni-307000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:22:18.678617    5404 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-307000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
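	
The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a hedged aside, kubeadm v1.26+ ships a validator that can sanity-check such a file on the node before it is used:

	# Validate every config document in the generated file
	minikube ssh -p newest-cni-307000 "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"
	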
	
	I1213 10:22:18.683617    5404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:22:18.700626    5404 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:22:18.707625    5404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:22:18.730624    5404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1213 10:22:18.758615    5404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:22:18.779614    5404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1213 10:22:18.815617    5404 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:22:18.825626    5404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
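	The /etc/hosts rewrite above is a replace-or-append idiom: strip any existing line ending in a tab plus the hostname, append the fresh IP-to-name mapping, stage the result under /tmp, then copy it back with cp rather than mv — inside a Docker container /etc/hosts is bind-mounted, so it has to be rewritten in place instead of renamed over. The same pattern, generalized (NAME and IP are placeholders, taken here from the log):

	    NAME=control-plane.minikube.internal
	    IP=192.168.76.2
	    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts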
	I1213 10:22:18.852622    5404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:22:19.053632    5404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:22:19.077625    5404 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000 for IP: 192.168.76.2
	I1213 10:22:19.077625    5404 certs.go:195] generating shared ca certs ...
	I1213 10:22:19.077625    5404 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:22:19.078628    5404 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1213 10:22:19.078628    5404 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1213 10:22:19.078628    5404 certs.go:257] generating profile certs ...
	I1213 10:22:19.079632    5404 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\client.key
	I1213 10:22:19.079632    5404 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\apiserver.key.1d6632be
	I1213 10:22:19.080629    5404 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\proxy-client.key
	I1213 10:22:19.080629    5404 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem (1338 bytes)
	W1213 10:22:19.081621    5404 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968_empty.pem, impossibly tiny 0 bytes
	I1213 10:22:19.081621    5404 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1213 10:22:19.081621    5404 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1213 10:22:19.081621    5404 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1213 10:22:19.081621    5404 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1213 10:22:19.082624    5404 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem (1708 bytes)
	I1213 10:22:19.083633    5404 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:22:19.120623    5404 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:22:19.152629    5404 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:22:19.207624    5404 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 10:22:19.246620    5404 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:22:19.276627    5404 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 10:22:19.311640    5404 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:22:19.353627    5404 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-307000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:22:19.387646    5404 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /usr/share/ca-certificates/29682.pem (1708 bytes)
	I1213 10:22:19.439630    5404 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:22:19.484179    5404 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem --> /usr/share/ca-certificates/2968.pem (1338 bytes)
	I1213 10:22:19.524210    5404 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:22:19.551174    5404 ssh_runner.go:195] Run: openssl version
	I1213 10:22:19.566169    5404 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29682.pem
	I1213 10:22:19.584171    5404 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29682.pem /etc/ssl/certs/29682.pem
	I1213 10:22:19.608175    5404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29682.pem
	I1213 10:22:19.617189    5404 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:48 /usr/share/ca-certificates/29682.pem
	I1213 10:22:19.624182    5404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29682.pem
	I1213 10:22:19.684182    5404 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:22:19.706186    5404 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:22:19.731212    5404 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:22:19.753185    5404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:22:19.760184    5404 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:22:19.764192    5404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:22:19.816159    5404 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:22:19.833682    5404 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2968.pem
	I1213 10:22:19.849691    5404 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2968.pem /etc/ssl/certs/2968.pem
	I1213 10:22:19.866684    5404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2968.pem
	I1213 10:22:19.874684    5404 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:48 /usr/share/ca-certificates/2968.pem
	I1213 10:22:19.878681    5404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2968.pem
	I1213 10:22:19.953950    5404 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
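	The three repeated sequences above install each CA into the node's OpenSSL trust store: place the PEM under /usr/share/ca-certificates, symlink it into /etc/ssl/certs, compute its subject hash, and confirm a <hash>.0 symlink exists — the naming convention OpenSSL uses to look up CAs by subject hash. Done by hand it looks roughly like this (a sketch; the log only verifies the final link rather than creating it this way):

	    CERT=/usr/share/ca-certificates/minikubeCA.pem      # path taken from the log
	    HASH=$(openssl x509 -hash -noout -in "$CERT")       # prints e.g. b5213941
	    sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"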
	I1213 10:22:19.974874    5404 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:22:19.987453    5404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:22:20.040025    5404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:22:20.095030    5404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:22:20.171968    5404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:22:20.234971    5404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:22:20.284972    5404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
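	Each openssl run above uses -checkend 86400, which exits 0 only if the certificate stays valid for at least the next 86400 seconds (24 hours); a non-zero exit is what would push minikube to regenerate control-plane certs. The exit-code contract, as a sketch:

	    CRT=/var/lib/minikube/certs/apiserver-kubelet-client.crt   # one of the certs from the log
	    if ! openssl x509 -noout -in "$CRT" -checkend 86400; then
	        echo "expires within 24h - would need regeneration"
	    fi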
	I1213 10:22:20.342737    5404 kubeadm.go:401] StartCluster: {Name:newest-cni-307000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-307000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:22:20.346571    5404 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 10:22:20.384195    5404 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:22:20.397188    5404 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:22:20.397188    5404 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:22:20.401182    5404 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:22:20.414183    5404 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:22:20.417178    5404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-307000
	I1213 10:22:20.474178    5404 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-307000" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:22:20.474178    5404 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-307000" cluster setting kubeconfig missing "newest-cni-307000" context setting]
	I1213 10:22:20.475180    5404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:22:20.495765    5404 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:22:20.509756    5404 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1213 10:22:20.509756    5404 kubeadm.go:602] duration metric: took 112.5667ms to restartPrimaryControlPlane
	I1213 10:22:20.509756    5404 kubeadm.go:403] duration metric: took 167.0168ms to StartCluster
	I1213 10:22:20.509756    5404 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:22:20.510764    5404 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:22:20.511756    5404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:22:20.512761    5404 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 10:22:20.512761    5404 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
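	The toEnable map above is the resolved addon set for this start: dashboard, default-storageclass and storage-provisioner are true, everything else stays off. The same view is available from the CLI once the profile is up (a sketch, assuming the profile name from the log):

	    minikube -p newest-cni-307000 addons list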
	I1213 10:22:20.512761    5404 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-307000"
	I1213 10:22:20.512761    5404 addons.go:70] Setting dashboard=true in profile "newest-cni-307000"
	I1213 10:22:20.512761    5404 addons.go:239] Setting addon dashboard=true in "newest-cni-307000"
	I1213 10:22:20.512761    5404 config.go:182] Loaded profile config "newest-cni-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	W1213 10:22:20.512761    5404 addons.go:248] addon dashboard should already be in state true
	I1213 10:22:20.512761    5404 addons.go:70] Setting default-storageclass=true in profile "newest-cni-307000"
	I1213 10:22:20.512761    5404 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-307000"
	I1213 10:22:20.512761    5404 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-307000"
	I1213 10:22:20.512761    5404 host.go:66] Checking if "newest-cni-307000" exists ...
	I1213 10:22:20.513759    5404 host.go:66] Checking if "newest-cni-307000" exists ...
	I1213 10:22:20.518770    5404 out.go:179] * Verifying Kubernetes components...
	I1213 10:22:20.524759    5404 cli_runner.go:164] Run: docker container inspect newest-cni-307000 --format={{.State.Status}}
	I1213 10:22:20.525765    5404 cli_runner.go:164] Run: docker container inspect newest-cni-307000 --format={{.State.Status}}
	I1213 10:22:20.525765    5404 cli_runner.go:164] Run: docker container inspect newest-cni-307000 --format={{.State.Status}}
	I1213 10:22:20.526758    5404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:22:20.587755    5404 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:22:20.587755    5404 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 10:22:20.590762    5404 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:22:20.590762    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:22:20.595775    5404 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 10:22:20.595775    5404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307000
	I1213 10:22:20.597765    5404 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 10:22:20.597765    5404 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 10:22:20.603768    5404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307000
	I1213 10:22:20.604768    5404 addons.go:239] Setting addon default-storageclass=true in "newest-cni-307000"
	I1213 10:22:20.604768    5404 host.go:66] Checking if "newest-cni-307000" exists ...
	I1213 10:22:20.617768    5404 cli_runner.go:164] Run: docker container inspect newest-cni-307000 --format={{.State.Status}}
	I1213 10:22:20.666017    5404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53942 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-307000\id_rsa Username:docker}
	I1213 10:22:20.666017    5404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53942 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-307000\id_rsa Username:docker}
	I1213 10:22:20.680994    5404 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:22:20.680994    5404 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:22:20.683989    5404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-307000
	I1213 10:22:20.746022    5404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53942 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-307000\id_rsa Username:docker}
	I1213 10:22:20.755004    5404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:22:20.808577    5404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-307000
	I1213 10:22:20.833592    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:22:20.836566    5404 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 10:22:20.836566    5404 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 10:22:20.871560    5404 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:22:20.876566    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:20.906584    5404 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 10:22:20.906584    5404 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 10:22:20.935577    5404 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 10:22:20.935577    5404 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 10:22:21.009586    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:22:21.010586    5404 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 10:22:21.010586    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W1213 10:22:21.026580    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:21.026580    5404 retry.go:31] will retry after 315.754685ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:21.035568    5404 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 10:22:21.035568    5404 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 10:22:21.092576    5404 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 10:22:21.092576    5404 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 10:22:21.115581    5404 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 10:22:21.116579    5404 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W1213 10:22:21.138583    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:21.138583    5404 retry.go:31] will retry after 198.805302ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:21.139587    5404 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 10:22:21.139587    5404 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 10:22:21.160577    5404 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:22:21.160577    5404 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 10:22:21.185570    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:22:21.280002    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:21.280002    5404 retry.go:31] will retry after 181.569886ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:21.341069    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:22:21.347079    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:22:21.377188    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:22:21.425188    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:21.425188    5404 retry.go:31] will retry after 326.423032ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:22:21.431187    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:21.431187    5404 retry.go:31] will retry after 526.62129ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
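	The pattern repeating from here on is expected during a restart: kubectl apply validates each manifest against the apiserver's OpenAPI schema at localhost:8443, which refuses connections until the restarted control plane is listening, so minikube's retry.go reruns every apply with short jittered backoffs (hundreds of milliseconds, growing toward a second in the entries above). The retry shape, reduced to a sketch (fixed sleep here; the real delays are jittered and increasing, and $MANIFEST stands for any of the addon YAMLs above):

	    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	        /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f "$MANIFEST"; do
	        sleep 0.5    # placeholder backoff
	    done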
	I1213 10:22:21.465790    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:22:21.557799    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:21.557799    5404 retry.go:31] will retry after 219.53354ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:21.757411    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:22:21.782428    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:22:21.851596    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:21.851596    5404 retry.go:31] will retry after 746.053815ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:22:21.874432    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:21.874432    5404 retry.go:31] will retry after 392.923692ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:21.876066    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:21.962076    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:22:22.045088    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:22.045088    5404 retry.go:31] will retry after 778.403796ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:22.273085    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:22:22.361102    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:22.361102    5404 retry.go:31] will retry after 1.081039321s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:22.377080    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:22.605371    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:22:22.705766    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:22.705801    5404 retry.go:31] will retry after 1.202454805s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:22.828612    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:22:22.878967    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:22:22.929615    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:22.929615    5404 retry.go:31] will retry after 847.629907ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:23.377579    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:23.448359    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:22:23.535180    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:23.535180    5404 retry.go:31] will retry after 1.570703412s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:23.781812    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:22:23.866365    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:23.866365    5404 retry.go:31] will retry after 1.235333468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:23.877363    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:23.914392    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:22:23.996363    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:23.996363    5404 retry.go:31] will retry after 1.798081775s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
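The "will retry after …" lines come from minikube's retry helper (retry.go:31); across this span the waits grow from under a second to roughly nine seconds, consistent with a jittered, growing backoff. A minimal sketch of that pattern, using a hypothetical applyWithRetry helper with an assumed 500ms starting delay and doubling growth (minikube's actual backoff parameters may differ):

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry runs a command and, on failure, sleeps a growing, jittered
// delay before trying again -- the pattern the retry.go:31 lines above record.
func applyWithRetry(name string, args []string, attempts int) error {
	base := 500 * time.Millisecond // assumed starting delay
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command(name, args...).Run(); err == nil {
			return nil
		}
		wait := base + time.Duration(rand.Int63n(int64(base))) // add jitter
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		base *= 2 // grow the base delay on each failure
	}
	return err
}

func main() {
	err := applyWithRetry("kubectl",
		[]string{"apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml"}, 5)
	if err != nil {
		fmt.Println("giving up:", err)
	}
}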
	I1213 10:22:24.377137    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:24.877314    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:25.105626    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:22:25.112245    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:22:25.201662    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:25.201662    5404 retry.go:31] will retry after 1.320114812s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:22:25.209641    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:25.209641    5404 retry.go:31] will retry after 1.691134953s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:25.377246    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:25.799001    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:22:25.879048    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:22:25.905639    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:25.905639    5404 retry.go:31] will retry after 1.772748394s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:26.377559    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:26.526541    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:22:26.623628    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:26.623672    5404 retry.go:31] will retry after 2.564120356s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:26.878512    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:26.909082    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:22:26.998078    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:26.998078    5404 retry.go:31] will retry after 3.871823518s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:27.377555    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:27.682959    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:22:27.764966    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:27.764966    5404 retry.go:31] will retry after 1.899058803s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:27.878155    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:28.376932    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:28.877896    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:29.194711    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:22:29.283422    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:29.283422    5404 retry.go:31] will retry after 3.284994473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:29.376803    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:29.668248    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:22:29.747248    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:29.747248    5404 retry.go:31] will retry after 2.230030954s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
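Each apply in this log is a single command executed inside the node over SSH: the version-pinned kubectl under /var/lib/minikube/binaries/v1.35.0-beta.0, run with KUBECONFIG=/var/lib/minikube/kubeconfig and one -f flag per addon manifest. A sketch of assembling that command string, with a hypothetical kubectlApplyCmd helper (minikube builds and runs these through its ssh_runner, not this code):

package main

import (
	"fmt"
	"strings"
)

// kubectlApplyCmd assembles the command string seen in the ssh_runner.go:195
// lines: the version-pinned kubectl binary, the node's kubeconfig, and one
// -f flag per manifest.
func kubectlApplyCmd(version string, manifests ...string) string {
	parts := []string{
		"sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/" + version + "/kubectl",
		"apply", "--force",
	}
	for _, m := range manifests {
		parts = append(parts, "-f", m)
	}
	return strings.Join(parts, " ")
}

func main() {
	fmt.Println(kubectlApplyCmd("v1.35.0-beta.0",
		"/etc/kubernetes/addons/storageclass.yaml"))
}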
	I1213 10:22:29.877580    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:30.376193    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:30.875324    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:22:30.878332    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:22:30.968324    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:30.968324    5404 retry.go:31] will retry after 4.563632619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:31.378842    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:31.876960    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:31.982205    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:22:32.075343    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:32.075343    5404 retry.go:31] will retry after 8.821938506s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:32.377837    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:32.577091    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:22:32.677305    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:32.677305    5404 retry.go:31] will retry after 9.183598884s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:32.877705    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:33.377362    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:33.878088    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:34.377151    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:34.878503    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:35.376936    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
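Interleaved with the apply attempts, minikube polls roughly every half second for a running apiserver process (ssh_runner.go:195, "sudo pgrep -xnf kube-apiserver.*minikube.*"). In pgrep, -f matches against the full command line, -x requires the whole line to match the pattern, and -n selects the newest match; exit status 0 means a process was found. A sketch of the same poll loop, using a hypothetical waitForAPIServer helper run locally rather than over SSH as minikube does:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls for a kube-apiserver process about every 500ms,
// mirroring the cadence of the pgrep lines above, until found or timeout.
func waitForAPIServer(timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matched the pattern.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return true
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}

func main() {
	fmt.Println("apiserver running:", waitForAPIServer(30*time.Second))
}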
	I1213 10:22:35.536694    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:22:35.637005    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:35.637005    5404 retry.go:31] will retry after 8.520160428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
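
Every apply in this run fails at client-side validation: kubectl first fetches the OpenAPI schema from the apiserver (https://localhost:8443/openapi/v2), and that connection is refused because nothing is listening. The --validate=false flag suggested in the error text would only skip the schema check; the apply itself would still fail against a down apiserver. A quick reachability probe, sketched below assuming the same localhost:8443 address from the errors, distinguishes "apiserver down" from a broken manifest.

```go
// Generic TCP reachability probe for the apiserver address the errors
// above point at. Not part of minikube; address and timeout are
// assumptions for illustration.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Matches the failure mode in the log: connection refused means
		// kube-apiserver is not listening, so kubectl's OpenAPI download
		// (and the apply itself) cannot succeed regardless of --validate.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
```
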
	I1213 10:22:35.877338    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:36.378175    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:36.877009    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:37.377058    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:37.877696    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:38.378443    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:38.877099    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:39.377378    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:39.876582    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:40.376689    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:40.876720    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:40.902758    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:22:40.989732    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:40.989732    5404 retry.go:31] will retry after 11.761287036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:41.377662    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:41.867127    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:22:41.877733    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:22:41.973874    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:41.973874    5404 retry.go:31] will retry after 11.773412872s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:42.377629    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:42.876905    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:43.376587    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:43.877605    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:44.161932    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:22:44.257937    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:44.257937    5404 retry.go:31] will retry after 12.541452897s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:44.381203    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:44.879259    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:45.377606    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:45.877498    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:46.378624    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:46.877877    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:47.378972    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:47.877614    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:48.376509    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:48.883557    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:49.378367    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:49.877642    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:50.379234    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:50.877811    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:51.377552    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:51.878966    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:52.378205    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
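
Interleaved with the applies, the ssh_runner lines poll `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms, waiting for the apiserver process to appear inside the node. A plain local approximation of that wait loop follows; in the real run this executes over SSH inside the minikube container, and waitForProcess is a hypothetical helper name.

```go
// Sketch of the ~500ms poll loop visible above: pgrep -x (exact match)
// -n (newest) -f (match full command line) exits 0 when at least one
// process matches the pattern.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls `pgrep -xnf pattern` until it matches or the
// timeout elapses.
func waitForProcess(pattern string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return true
		}
		time.Sleep(500 * time.Millisecond) // the cadence seen in the log timestamps
	}
	return false
}

func main() {
	if waitForProcess("kube-apiserver.*minikube.*", 10*time.Second) {
		fmt.Println("kube-apiserver is running")
	} else {
		fmt.Println("timed out waiting for kube-apiserver")
	}
}
```
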
	I1213 10:22:52.756073    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:22:52.838705    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:52.838705    5404 retry.go:31] will retry after 17.207837011s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:52.877629    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:53.378120    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:53.753136    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:22:53.850245    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:53.850347    5404 retry.go:31] will retry after 21.151356602s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:53.878833    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:54.379165    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:54.878640    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:55.376293    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:55.878587    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:56.376887    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:56.804180    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:22:56.881427    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:22:56.912408    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:56.912408    5404 retry.go:31] will retry after 9.814052413s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:22:57.379639    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:57.878210    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:58.378752    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:58.877729    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:59.377846    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:22:59.879328    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:00.377731    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:00.877718    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:01.377319    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:01.878199    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:02.378205    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:02.877482    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:03.378234    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:03.879726    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:04.378272    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:04.878668    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:05.379671    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:05.878576    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:06.386269    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:06.731680    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:23:06.829889    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:23:06.829990    5404 retry.go:31] will retry after 15.539058256s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:23:06.877854    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:07.380474    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:07.880323    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:08.378230    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:08.877776    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:09.378281    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:09.879546    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:10.051220    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:23:10.152558    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:23:10.152558    5404 retry.go:31] will retry after 20.919822559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:23:10.376415    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:10.877445    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:11.378154    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:11.879569    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:12.379965    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:12.878281    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:13.378821    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:13.878082    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:14.377353    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:14.877340    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:15.007933    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:23:15.110406    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:23:15.110406    5404 retry.go:31] will retry after 18.779740146s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:23:15.378478    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:15.878334    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:16.378776    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:16.881316    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:17.378957    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:17.878043    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:18.378528    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:18.879787    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:19.380098    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:19.878902    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:20.379840    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:20.877270    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:23:20.907859    5404 logs.go:282] 0 containers: []
	W1213 10:23:20.907859    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:23:20.912007    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:23:20.942566    5404 logs.go:282] 0 containers: []
	W1213 10:23:20.942566    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:23:20.946779    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:23:20.976806    5404 logs.go:282] 0 containers: []
	W1213 10:23:20.976806    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:23:20.980715    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:23:21.015115    5404 logs.go:282] 0 containers: []
	W1213 10:23:21.015115    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:23:21.019335    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:23:21.051123    5404 logs.go:282] 0 containers: []
	W1213 10:23:21.051180    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:23:21.055192    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:23:21.091603    5404 logs.go:282] 0 containers: []
	W1213 10:23:21.091603    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:23:21.095325    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:23:21.121828    5404 logs.go:282] 0 containers: []
	W1213 10:23:21.121828    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:23:21.125935    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:23:21.161034    5404 logs.go:282] 0 containers: []
	W1213 10:23:21.161066    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
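
With the apiserver never appearing, the run falls back to scanning for control-plane containers: each logs.go:282 line above is a `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` that returns nothing, producing the "No container was found" warnings. A sketch of that scan, with an illustrative helper name and component list:

```go
// For each expected control-plane component, list container IDs whose
// name matches the k8s_<component> prefix, as the log does. All scans
// in this run return zero containers.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("docker ps failed:", err)
			return
		}
		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
	}
}
```
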
	I1213 10:23:21.161066    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:23:21.161118    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:23:21.219674    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:23:21.219674    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:23:21.290836    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:23:21.290836    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:23:21.365834    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:23:21.365834    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:23:21.424177    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:23:21.424177    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:23:21.553845    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:23:21.545349    3456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:21.546525    3456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:21.547608    3456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:21.548533    3456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:21.550817    3456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:23:21.545349    3456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:21.546525    3456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:21.547608    3456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:21.548533    3456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:21.550817    3456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
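
The closing diagnostic, `kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`, fails with the same connection-refused errors, confirming the control plane never came up rather than any addon manifest being invalid. A generic sketch of running such a command with stdout and stderr captured separately, the way the report prints them; the surrounding code is illustrative, not minikube's ssh_runner:

```go
// Run a diagnostic command and report stdout, stderr, and exit status
// separately, mirroring the stdout:/stderr: blocks in the log above.
// The kubectl path and arguments are taken from the log; everything
// else is an illustrative assumption.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	err := cmd.Run()
	fmt.Printf("stdout:\n%s\nstderr:\n%s\n", stdout.String(), stderr.String())
	if err != nil {
		// Mirrors the "Process exited with status 1" lines in the log.
		fmt.Println("command failed:", err)
	}
}
```
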
	I1213 10:23:22.374033    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:23:22.456366    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:23:22.456366    5404 retry.go:31] will retry after 44.153371634s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
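	(Note: the pairing of "addons.go: apply failed, will retry" with "retry.go: will retry after 44.153371634s" shows minikube's addon applier wrapping kubectl apply in a retry loop with irregular (jittered) waits. The sketch below is a hypothetical reconstruction of that pattern under stated assumptions, not minikube's actual pkg code; the manifest path is taken from the log:

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// applyWithRetry retries `kubectl apply` with a jittered delay,
	// mirroring the "will retry after ..." lines in the log above.
	func applyWithRetry(manifest string, attempts int) error {
		base := 20 * time.Second
		var err error
		for i := 0; i < attempts; i++ {
			if err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run(); err == nil {
				return nil
			}
			wait := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("apply failed, will retry after %s: %v\n", wait, err)
			time.Sleep(wait)
		}
		return err
	}

	func main() {
		if err := applyWithRetry("/etc/kubernetes/addons/dashboard-ns.yaml", 3); err != nil {
			fmt.Println("giving up:", err)
		}
	}
	)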
	I1213 10:23:24.060351    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:24.086359    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:23:24.123549    5404 logs.go:282] 0 containers: []
	W1213 10:23:24.123549    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:23:24.128551    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:23:24.162112    5404 logs.go:282] 0 containers: []
	W1213 10:23:24.162112    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:23:24.168196    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:23:24.207096    5404 logs.go:282] 0 containers: []
	W1213 10:23:24.207096    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:23:24.213071    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:23:24.249366    5404 logs.go:282] 0 containers: []
	W1213 10:23:24.249366    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:23:24.253353    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:23:24.283787    5404 logs.go:282] 0 containers: []
	W1213 10:23:24.283787    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:23:24.288915    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:23:24.328545    5404 logs.go:282] 0 containers: []
	W1213 10:23:24.328545    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:23:24.332575    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:23:24.364718    5404 logs.go:282] 0 containers: []
	W1213 10:23:24.364718    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:23:24.369722    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:23:24.409312    5404 logs.go:282] 0 containers: []
	W1213 10:23:24.409312    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:23:24.409312    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:23:24.409312    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:23:24.501373    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:23:24.493271    3608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:24.494556    3608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:24.495491    3608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:24.496861    3608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:24.498035    3608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:23:24.493271    3608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:24.494556    3608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:24.495491    3608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:24.496861    3608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:24.498035    3608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:23:24.501373    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:23:24.501373    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:23:24.536065    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:23:24.536065    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:23:24.590995    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:23:24.591550    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:23:24.670655    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:23:24.670655    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
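	(Note: each diagnostic cycle above runs one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` per control-plane component and warns when nothing matches; zero matches across kube-apiserver, etcd, coredns, the scheduler, kube-proxy and the controller-manager means the control plane never came up, and the subsequent journalctl/dmesg gathering is fallback evidence collection. A hypothetical Go sketch of that probe loop, assuming only the docker CLI shown in the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
			"kubernetes-dashboard",
		}
		for _, c := range components {
			// Same filter minikube issues: container names are prefixed k8s_.
			out, _ := exec.Command("docker", "ps", "-a",
				"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
				continue
			}
			fmt.Printf("%d containers: %v\n", len(ids), ids)
		}
	}
	)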
	I1213 10:23:27.215711    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:27.237493    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:23:27.268796    5404 logs.go:282] 0 containers: []
	W1213 10:23:27.268796    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:23:27.271791    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:23:27.302800    5404 logs.go:282] 0 containers: []
	W1213 10:23:27.302800    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:23:27.305807    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:23:27.336219    5404 logs.go:282] 0 containers: []
	W1213 10:23:27.336219    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:23:27.340046    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:23:27.373262    5404 logs.go:282] 0 containers: []
	W1213 10:23:27.373262    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:23:27.378253    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:23:27.412250    5404 logs.go:282] 0 containers: []
	W1213 10:23:27.412250    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:23:27.416243    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:23:27.450258    5404 logs.go:282] 0 containers: []
	W1213 10:23:27.450258    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:23:27.453252    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:23:27.483244    5404 logs.go:282] 0 containers: []
	W1213 10:23:27.483244    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:23:27.486242    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:23:27.520448    5404 logs.go:282] 0 containers: []
	W1213 10:23:27.520448    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:23:27.520448    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:23:27.520448    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:23:27.584064    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:23:27.585069    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:23:27.622076    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:23:27.622076    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:23:27.707845    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:23:27.697636    3780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:27.699324    3780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:27.700563    3780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:27.701703    3780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:27.703174    3780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:23:27.697636    3780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:27.699324    3780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:27.700563    3780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:27.701703    3780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:27.703174    3780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:23:27.708842    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:23:27.708842    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:23:27.736199    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:23:27.736199    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:23:30.293005    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:30.318354    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:23:30.353641    5404 logs.go:282] 0 containers: []
	W1213 10:23:30.353675    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:23:30.357612    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:23:30.396727    5404 logs.go:282] 0 containers: []
	W1213 10:23:30.396831    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:23:30.400846    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:23:30.442045    5404 logs.go:282] 0 containers: []
	W1213 10:23:30.442045    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:23:30.446021    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:23:30.481487    5404 logs.go:282] 0 containers: []
	W1213 10:23:30.481487    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:23:30.489802    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:23:30.525385    5404 logs.go:282] 0 containers: []
	W1213 10:23:30.525385    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:23:30.528392    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:23:30.559376    5404 logs.go:282] 0 containers: []
	W1213 10:23:30.559376    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:23:30.562379    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:23:30.593383    5404 logs.go:282] 0 containers: []
	W1213 10:23:30.593383    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:23:30.597378    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:23:30.627409    5404 logs.go:282] 0 containers: []
	W1213 10:23:30.627409    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:23:30.627409    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:23:30.627409    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:23:30.692384    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:23:30.692384    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:23:30.729077    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:23:30.729077    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:23:30.819679    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:23:30.809837    3940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:30.810910    3940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:30.812153    3940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:30.813459    3940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:30.814610    3940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:23:30.809837    3940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:30.810910    3940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:30.812153    3940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:30.813459    3940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:30.814610    3940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:23:30.819679    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:23:30.819679    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:23:30.852539    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:23:30.852539    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:23:31.078388    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:23:31.188218    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:23:31.188218    5404 retry.go:31] will retry after 40.052116827s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:23:33.410897    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:33.433093    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:23:33.470766    5404 logs.go:282] 0 containers: []
	W1213 10:23:33.470766    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:23:33.474760    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:23:33.505762    5404 logs.go:282] 0 containers: []
	W1213 10:23:33.505762    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:23:33.509768    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:23:33.541774    5404 logs.go:282] 0 containers: []
	W1213 10:23:33.541774    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:23:33.545757    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:23:33.576756    5404 logs.go:282] 0 containers: []
	W1213 10:23:33.576756    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:23:33.579753    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:23:33.611772    5404 logs.go:282] 0 containers: []
	W1213 10:23:33.611772    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:23:33.615760    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:23:33.644758    5404 logs.go:282] 0 containers: []
	W1213 10:23:33.644758    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:23:33.647756    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:23:33.676799    5404 logs.go:282] 0 containers: []
	W1213 10:23:33.676799    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:23:33.680892    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:23:33.714096    5404 logs.go:282] 0 containers: []
	W1213 10:23:33.714096    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:23:33.714096    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:23:33.714096    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:23:33.780760    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:23:33.780836    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:23:33.827737    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:23:33.827737    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 10:23:33.897329    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:23:33.927339    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:23:33.915948    4122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:33.917581    4122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:33.919021    4122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:33.920231    4122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:33.921509    4122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:23:33.915948    4122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:33.917581    4122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:33.919021    4122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:33.920231    4122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:33.921509    4122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:23:33.927339    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:23:33.927339    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1213 10:23:33.994570    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:23:33.994570    5404 retry.go:31] will retry after 36.396432767s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:23:34.003569    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:23:34.003569    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:23:36.566075    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:36.591980    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:23:36.625504    5404 logs.go:282] 0 containers: []
	W1213 10:23:36.625504    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:23:36.628501    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:23:36.656749    5404 logs.go:282] 0 containers: []
	W1213 10:23:36.656749    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:23:36.660532    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:23:36.690758    5404 logs.go:282] 0 containers: []
	W1213 10:23:36.690758    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:23:36.697629    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:23:36.732397    5404 logs.go:282] 0 containers: []
	W1213 10:23:36.732397    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:23:36.736434    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:23:36.766092    5404 logs.go:282] 0 containers: []
	W1213 10:23:36.766092    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:23:36.770304    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:23:36.804639    5404 logs.go:282] 0 containers: []
	W1213 10:23:36.804639    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:23:36.811611    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:23:36.843134    5404 logs.go:282] 0 containers: []
	W1213 10:23:36.843134    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:23:36.848123    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:23:36.879223    5404 logs.go:282] 0 containers: []
	W1213 10:23:36.879223    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:23:36.879223    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:23:36.879223    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:23:36.929316    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:23:36.929316    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:23:37.023535    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:23:37.008261    4296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:37.009303    4296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:37.010733    4296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:37.012066    4296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:37.013784    4296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:23:37.008261    4296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:37.009303    4296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:37.010733    4296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:37.012066    4296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:37.013784    4296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:23:37.023589    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:23:37.023624    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:23:37.055980    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:23:37.055980    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:23:37.116293    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:23:37.116293    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:23:39.692572    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:39.716256    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:23:39.748724    5404 logs.go:282] 0 containers: []
	W1213 10:23:39.748724    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:23:39.752572    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:23:39.784645    5404 logs.go:282] 0 containers: []
	W1213 10:23:39.784645    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:23:39.788875    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:23:39.820124    5404 logs.go:282] 0 containers: []
	W1213 10:23:39.820124    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:23:39.823694    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:23:39.857048    5404 logs.go:282] 0 containers: []
	W1213 10:23:39.857048    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:23:39.860995    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:23:39.890610    5404 logs.go:282] 0 containers: []
	W1213 10:23:39.890610    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:23:39.895702    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:23:39.927186    5404 logs.go:282] 0 containers: []
	W1213 10:23:39.927186    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:23:39.931320    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:23:39.963350    5404 logs.go:282] 0 containers: []
	W1213 10:23:39.963350    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:23:39.968176    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:23:39.996423    5404 logs.go:282] 0 containers: []
	W1213 10:23:39.996423    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:23:39.996423    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:23:39.996423    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:23:40.064872    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:23:40.064872    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:23:40.104875    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:23:40.104875    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:23:40.205602    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:23:40.193156    4471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:40.195684    4471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:40.198332    4471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:40.199188    4471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:40.200393    4471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:23:40.193156    4471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:40.195684    4471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:40.198332    4471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:40.199188    4471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:40.200393    4471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:23:40.205602    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:23:40.205602    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:23:40.234560    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:23:40.234560    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:23:42.790704    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:42.816791    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:23:42.850901    5404 logs.go:282] 0 containers: []
	W1213 10:23:42.850901    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:23:42.854913    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:23:42.886573    5404 logs.go:282] 0 containers: []
	W1213 10:23:42.886573    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:23:42.890524    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:23:42.920052    5404 logs.go:282] 0 containers: []
	W1213 10:23:42.920052    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:23:42.924329    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:23:42.962345    5404 logs.go:282] 0 containers: []
	W1213 10:23:42.962345    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:23:42.966336    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:23:43.002827    5404 logs.go:282] 0 containers: []
	W1213 10:23:43.002827    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:23:43.007090    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:23:43.038280    5404 logs.go:282] 0 containers: []
	W1213 10:23:43.038280    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:23:43.041276    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:23:43.068798    5404 logs.go:282] 0 containers: []
	W1213 10:23:43.068798    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:23:43.072214    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:23:43.111332    5404 logs.go:282] 0 containers: []
	W1213 10:23:43.111332    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:23:43.111332    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:23:43.111332    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:23:43.210352    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:23:43.199026    4628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:43.200145    4628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:43.201046    4628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:43.203901    4628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:43.205088    4628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:23:43.199026    4628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:43.200145    4628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:43.201046    4628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:43.203901    4628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:43.205088    4628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:23:43.210352    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:23:43.210352    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:23:43.238224    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:23:43.238224    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:23:43.287855    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:23:43.287855    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:23:43.362725    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:23:43.362725    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:23:45.914090    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:45.940326    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:23:45.973525    5404 logs.go:282] 0 containers: []
	W1213 10:23:45.973569    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:23:45.976819    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:23:46.010744    5404 logs.go:282] 0 containers: []
	W1213 10:23:46.010744    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:23:46.015739    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:23:46.047723    5404 logs.go:282] 0 containers: []
	W1213 10:23:46.047723    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:23:46.051757    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:23:46.080823    5404 logs.go:282] 0 containers: []
	W1213 10:23:46.080823    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:23:46.084749    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:23:46.122913    5404 logs.go:282] 0 containers: []
	W1213 10:23:46.122913    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:23:46.126907    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:23:46.161484    5404 logs.go:282] 0 containers: []
	W1213 10:23:46.161484    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:23:46.164474    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:23:46.206477    5404 logs.go:282] 0 containers: []
	W1213 10:23:46.206501    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:23:46.211498    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:23:46.241309    5404 logs.go:282] 0 containers: []
	W1213 10:23:46.241309    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:23:46.241309    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:23:46.241309    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:23:46.317107    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:23:46.317107    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:23:46.356941    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:23:46.356941    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:23:46.442702    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:23:46.431884    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:46.432588    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:46.434794    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:46.436016    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:46.436826    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:23:46.431884    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:46.432588    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:46.434794    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:46.436016    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:46.436826    4800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:23:46.442702    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:23:46.442702    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:23:46.470214    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:23:46.470214    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
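The container-status collector prefers crictl when it is on PATH and falls back to plain docker ps. The same one-liner, spelled with $() instead of backticks (a sketch of the logged shell command, not minikube source):

    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
    # If crictl is absent, `which` prints nothing, echo supplies the bare
    # name, sudo fails to run it, and the docker fallback takes over.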
	I1213 10:23:49.026346    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:49.048541    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:23:49.083981    5404 logs.go:282] 0 containers: []
	W1213 10:23:49.083981    5404 logs.go:284] No container was found matching "kube-apiserver"
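These name filters lean on the k8s_<container>_<pod>_... naming convention that the Docker-backed runtime (cri-dockerd/dockershim) gives every pod container, so zero matches for k8s_kube-apiserver means the control plane never came up rather than being mislabelled. The same filter, widened to list whatever Kubernetes-managed containers do exist:

    docker ps -a --filter=name=k8s_ --format '{{.Names}}\t{{.Status}}'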
	I1213 10:23:49.087235    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:23:49.125924    5404 logs.go:282] 0 containers: []
	W1213 10:23:49.125967    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:23:49.130104    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:23:49.166975    5404 logs.go:282] 0 containers: []
	W1213 10:23:49.166975    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:23:49.171581    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:23:49.207353    5404 logs.go:282] 0 containers: []
	W1213 10:23:49.207353    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:23:49.210359    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:23:49.240996    5404 logs.go:282] 0 containers: []
	W1213 10:23:49.240996    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:23:49.247259    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:23:49.276596    5404 logs.go:282] 0 containers: []
	W1213 10:23:49.276596    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:23:49.280401    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:23:49.313484    5404 logs.go:282] 0 containers: []
	W1213 10:23:49.313484    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:23:49.317444    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:23:49.350424    5404 logs.go:282] 0 containers: []
	W1213 10:23:49.350424    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:23:49.350424    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:23:49.350424    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:23:49.412557    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:23:49.412557    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:23:49.453745    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:23:49.454754    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:23:49.547974    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:23:49.537856    4957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:49.538609    4957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:49.540744    4957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:49.541635    4957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:49.544298    4957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (identical to the five connection-refused lines above) ** /stderr **
	I1213 10:23:49.547974    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:23:49.547974    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:23:49.579464    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:23:49.579464    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:23:52.137450    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:52.163503    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:23:52.198358    5404 logs.go:282] 0 containers: []
	W1213 10:23:52.198358    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:23:52.203347    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:23:52.238599    5404 logs.go:282] 0 containers: []
	W1213 10:23:52.238599    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:23:52.242081    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:23:52.276941    5404 logs.go:282] 0 containers: []
	W1213 10:23:52.276941    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:23:52.280442    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:23:52.309997    5404 logs.go:282] 0 containers: []
	W1213 10:23:52.310046    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:23:52.313268    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:23:52.345280    5404 logs.go:282] 0 containers: []
	W1213 10:23:52.345280    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:23:52.349163    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:23:52.378172    5404 logs.go:282] 0 containers: []
	W1213 10:23:52.378250    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:23:52.381825    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:23:52.412111    5404 logs.go:282] 0 containers: []
	W1213 10:23:52.412187    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:23:52.415734    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:23:52.447139    5404 logs.go:282] 0 containers: []
	W1213 10:23:52.447139    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:23:52.447139    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:23:52.447139    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:23:52.484056    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:23:52.484056    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:23:52.574308    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:23:52.562872    5122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:52.564013    5122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:52.565849    5122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:52.566936    5122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:52.568019    5122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (identical to the five connection-refused lines above) ** /stderr **
	I1213 10:23:52.574308    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:23:52.574308    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:23:52.606673    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:23:52.606673    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:23:52.661201    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:23:52.661243    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:23:55.236299    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:55.258384    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:23:55.295565    5404 logs.go:282] 0 containers: []
	W1213 10:23:55.295565    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:23:55.298986    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:23:55.334439    5404 logs.go:282] 0 containers: []
	W1213 10:23:55.334439    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:23:55.338646    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:23:55.370513    5404 logs.go:282] 0 containers: []
	W1213 10:23:55.370513    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:23:55.374971    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:23:55.406319    5404 logs.go:282] 0 containers: []
	W1213 10:23:55.406319    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:23:55.411544    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:23:55.445913    5404 logs.go:282] 0 containers: []
	W1213 10:23:55.445913    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:23:55.449725    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:23:55.478651    5404 logs.go:282] 0 containers: []
	W1213 10:23:55.478651    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:23:55.482750    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:23:55.515252    5404 logs.go:282] 0 containers: []
	W1213 10:23:55.515252    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:23:55.520525    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:23:55.553391    5404 logs.go:282] 0 containers: []
	W1213 10:23:55.553391    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:23:55.553391    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:23:55.553391    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:23:55.621270    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:23:55.621270    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:23:55.665216    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:23:55.665216    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:23:55.762075    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:23:55.751463    5287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:55.752402    5287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:55.755062    5287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:55.757370    5287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:55.758422    5287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (identical to the five connection-refused lines above) ** /stderr **
	I1213 10:23:55.762075    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:23:55.762125    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:23:55.789384    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:23:55.789434    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:23:58.377262    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:23:58.404238    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:23:58.442742    5404 logs.go:282] 0 containers: []
	W1213 10:23:58.442742    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:23:58.447553    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:23:58.491266    5404 logs.go:282] 0 containers: []
	W1213 10:23:58.491300    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:23:58.497475    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:23:58.534889    5404 logs.go:282] 0 containers: []
	W1213 10:23:58.534889    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:23:58.538953    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:23:58.567042    5404 logs.go:282] 0 containers: []
	W1213 10:23:58.567042    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:23:58.571075    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:23:58.606644    5404 logs.go:282] 0 containers: []
	W1213 10:23:58.606644    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:23:58.610649    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:23:58.639905    5404 logs.go:282] 0 containers: []
	W1213 10:23:58.639905    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:23:58.645192    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:23:58.673613    5404 logs.go:282] 0 containers: []
	W1213 10:23:58.673613    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:23:58.677432    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:23:58.713234    5404 logs.go:282] 0 containers: []
	W1213 10:23:58.713234    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:23:58.713336    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:23:58.713336    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:23:58.778028    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:23:58.779027    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:23:58.826256    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:23:58.826256    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:23:58.928953    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:23:58.919613    5448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:58.920670    5448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:58.921643    5448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:58.924434    5448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:23:58.925215    5448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (identical to the five connection-refused lines above) ** /stderr **
	I1213 10:23:58.928953    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:23:58.928953    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:23:58.958942    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:23:58.958942    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:24:01.531922    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:24:01.559451    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:24:01.590845    5404 logs.go:282] 0 containers: []
	W1213 10:24:01.590845    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:24:01.595189    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:24:01.626236    5404 logs.go:282] 0 containers: []
	W1213 10:24:01.626236    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:24:01.630712    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:24:01.666702    5404 logs.go:282] 0 containers: []
	W1213 10:24:01.666702    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:24:01.670614    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:24:01.701793    5404 logs.go:282] 0 containers: []
	W1213 10:24:01.701793    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:24:01.706277    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:24:01.735977    5404 logs.go:282] 0 containers: []
	W1213 10:24:01.735977    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:24:01.739571    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:24:01.769956    5404 logs.go:282] 0 containers: []
	W1213 10:24:01.769956    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:24:01.774143    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:24:01.806964    5404 logs.go:282] 0 containers: []
	W1213 10:24:01.806964    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:24:01.810987    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:24:01.845584    5404 logs.go:282] 0 containers: []
	W1213 10:24:01.845611    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:24:01.845650    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:24:01.845686    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:24:01.925231    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:24:01.925231    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:24:01.972968    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:24:01.972968    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:24:02.065022    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:24:02.053894    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:02.055028    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:02.057184    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:02.058298    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:02.059056    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (identical to the five connection-refused lines above) ** /stderr **
	I1213 10:24:02.065022    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:24:02.065022    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:24:02.092021    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:24:02.092021    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:24:04.649752    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:24:04.673250    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:24:04.710300    5404 logs.go:282] 0 containers: []
	W1213 10:24:04.710358    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:24:04.714298    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:24:04.746143    5404 logs.go:282] 0 containers: []
	W1213 10:24:04.746143    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:24:04.751852    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:24:04.786935    5404 logs.go:282] 0 containers: []
	W1213 10:24:04.786935    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:24:04.791020    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:24:04.823023    5404 logs.go:282] 0 containers: []
	W1213 10:24:04.824009    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:24:04.829026    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:24:04.874022    5404 logs.go:282] 0 containers: []
	W1213 10:24:04.874022    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:24:04.879031    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:24:04.922016    5404 logs.go:282] 0 containers: []
	W1213 10:24:04.922016    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:24:04.927011    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:24:04.995010    5404 logs.go:282] 0 containers: []
	W1213 10:24:04.995010    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:24:04.999012    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:24:05.042018    5404 logs.go:282] 0 containers: []
	W1213 10:24:05.042018    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:24:05.042018    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:24:05.042018    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:24:05.076027    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:24:05.076027    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:24:05.153035    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:24:05.153035    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:24:05.236024    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:24:05.236024    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:24:05.287022    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:24:05.287022    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:24:05.400021    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:24:05.386802    5807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:05.389040    5807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:05.391391    5807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:05.392475    5807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:05.394164    5807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (identical to the five connection-refused lines above) ** /stderr **
	I1213 10:24:06.618254    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:24:06.723876    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:24:06.724891    5404 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr: (same ten "error validating ... connection refused" lines as in the retry warning above)
	]
	! Enabling 'dashboard' returned an error (console echo of the warning above, same command and stderr)
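kubectl apply validates each manifest against the server's OpenAPI schema before sending anything, so with the apiserver unreachable all ten dashboard files fail at that first round trip and minikube queues a retry. The error text suggests --validate=false, but that only skips validation; the apply itself still needs the server. Illustrated with the binary and paths from the log, first manifest only:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
      --validate=false -f /etc/kubernetes/addons/dashboard-ns.yaml
    # Still fails: the request itself gets connection refused on :8443.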
	I1213 10:24:07.906467    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:24:07.933189    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:24:07.966399    5404 logs.go:282] 0 containers: []
	W1213 10:24:07.966399    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:24:07.972134    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:24:08.002177    5404 logs.go:282] 0 containers: []
	W1213 10:24:08.002177    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:24:08.006010    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:24:08.042957    5404 logs.go:282] 0 containers: []
	W1213 10:24:08.042957    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:24:08.046935    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:24:08.076983    5404 logs.go:282] 0 containers: []
	W1213 10:24:08.076983    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:24:08.080820    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:24:08.115943    5404 logs.go:282] 0 containers: []
	W1213 10:24:08.115943    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:24:08.119946    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:24:08.151949    5404 logs.go:282] 0 containers: []
	W1213 10:24:08.151949    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:24:08.154944    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:24:08.184818    5404 logs.go:282] 0 containers: []
	W1213 10:24:08.184818    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:24:08.190088    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:24:08.226256    5404 logs.go:282] 0 containers: []
	W1213 10:24:08.226256    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:24:08.226256    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:24:08.226256    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:24:08.315175    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:24:08.304883    5954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:08.305976    5954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:08.307108    5954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:08.308682    5954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:08.309703    5954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (identical to the five connection-refused lines above) ** /stderr **
	I1213 10:24:08.315175    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:24:08.315175    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:24:08.343944    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:24:08.343944    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:24:08.393921    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:24:08.393921    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:24:08.465461    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:24:08.465461    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:24:10.396698    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:24:10.486148    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:24:10.486148    5404 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr: (same "error validating /etc/kubernetes/addons/storage-provisioner.yaml ... connection refused" line as in the retry warning above)
	]
	! Enabling 'storage-provisioner' returned an error (console echo of the warning above, same command and stderr)
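addons.go logs "apply failed, will retry" before surfacing each warning, so an enable attempt loops until the apiserver answers or minikube gives up; that is also why the storageclass apply below interleaves with the log-gathering probes. A minimal sketch of the retry idea in shell (minikube's actual retry lives in Go; the loop bounds here are invented):

    for i in 1 2 3 4 5; do
      sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
        -f /etc/kubernetes/addons/storage-provisioner.yaml && break
      sleep 5
    done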
	I1213 10:24:11.008093    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:24:11.031680    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:24:11.067299    5404 logs.go:282] 0 containers: []
	W1213 10:24:11.067299    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:24:11.070308    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:24:11.099068    5404 logs.go:282] 0 containers: []
	W1213 10:24:11.099068    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:24:11.102235    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:24:11.134485    5404 logs.go:282] 0 containers: []
	W1213 10:24:11.134485    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:24:11.138064    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:24:11.169175    5404 logs.go:282] 0 containers: []
	W1213 10:24:11.169175    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:24:11.172982    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:24:11.202416    5404 logs.go:282] 0 containers: []
	W1213 10:24:11.202416    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:24:11.208980    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:24:11.237813    5404 logs.go:282] 0 containers: []
	W1213 10:24:11.237813    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:24:11.241992    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:24:11.244994    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:24:11.275819    5404 logs.go:282] 0 containers: []
	W1213 10:24:11.275819    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:24:11.280109    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	W1213 10:24:11.333897    5404 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:24:11.333897    5404 logs.go:282] 0 containers: []
	W1213 10:24:11.333897    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:24:11.333897    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:24:11.333897    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1213 10:24:11.333897    5404 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 10:24:11.337479    5404 out.go:179] * Enabled addons: 
	I1213 10:24:11.339470    5404 addons.go:530] duration metric: took 1m50.8251037s for enable addons: enabled=[]
	I1213 10:24:11.368115    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:24:11.368115    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:24:11.418765    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:24:11.418765    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:24:11.483917    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:24:11.483917    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:24:11.523253    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:24:11.523253    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:24:11.607142    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:24:11.597915    6168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:11.598571    6168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:11.600542    6168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:11.601720    6168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:11.602770    6168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
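From here the bootstrapper settles into a steady poll, roughly every three seconds: pgrep for a kube-apiserver process, then docker ps -a against each k8s_<component> container name (the cri-dockerd naming convention), then a fresh round of log gathering. Every "0 containers" line means the control-plane containers were never created, not merely that they exited. A hypothetical one-liner reproducing the wait by hand:

    # poll until a kube-apiserver container appears (k8s_ prefix assumes the
    # docker driver; drop -a to require a *running* container instead)
    until docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}}' | grep -q .; do sleep 3; done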
	I1213 10:24:14.112463    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:24:14.140038    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:24:14.176168    5404 logs.go:282] 0 containers: []
	W1213 10:24:14.176168    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:24:14.182827    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:24:14.210109    5404 logs.go:282] 0 containers: []
	W1213 10:24:14.210109    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:24:14.214815    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:24:14.243978    5404 logs.go:282] 0 containers: []
	W1213 10:24:14.243978    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:24:14.247682    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:24:14.284707    5404 logs.go:282] 0 containers: []
	W1213 10:24:14.284707    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:24:14.288841    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:24:14.321542    5404 logs.go:282] 0 containers: []
	W1213 10:24:14.321542    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:24:14.325162    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:24:14.357267    5404 logs.go:282] 0 containers: []
	W1213 10:24:14.357267    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:24:14.360571    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:24:14.392858    5404 logs.go:282] 0 containers: []
	W1213 10:24:14.392858    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:24:14.397297    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:24:14.428026    5404 logs.go:282] 0 containers: []
	W1213 10:24:14.428026    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:24:14.428026    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:24:14.428026    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:24:14.484476    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:24:14.484476    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:24:14.546468    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:24:14.546468    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:24:14.586140    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:24:14.586140    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:24:14.690128    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:24:14.668746    6330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:14.669545    6330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:14.681908    6330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:14.684285    6330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:14.685770    6330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
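The repeated memcache.go errors are kubectl's discovery client failing on its very first request, the API group list, so the two things worth checking by hand are where the kubeconfig points and whether anything listens on that port. A sketch, assuming a shell on the node and that ss is available there:

    # expected to show server: https://localhost:8443
    sudo grep 'server:' /var/lib/minikube/kubeconfig
    # empty output while the apiserver is down
    sudo ss -ltn | grep -w 8443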
	I1213 10:24:14.690128    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:24:14.690128    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:24:17.223849    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:24:17.250334    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:24:17.281533    5404 logs.go:282] 0 containers: []
	W1213 10:24:17.281612    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:24:17.284827    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:24:17.316680    5404 logs.go:282] 0 containers: []
	W1213 10:24:17.316680    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:24:17.321551    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:24:17.351125    5404 logs.go:282] 0 containers: []
	W1213 10:24:17.351125    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:24:17.354772    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:24:17.383813    5404 logs.go:282] 0 containers: []
	W1213 10:24:17.383872    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:24:17.389497    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:24:17.421169    5404 logs.go:282] 0 containers: []
	W1213 10:24:17.421204    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:24:17.424508    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:24:17.453973    5404 logs.go:282] 0 containers: []
	W1213 10:24:17.454023    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:24:17.457311    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:24:17.485196    5404 logs.go:282] 0 containers: []
	W1213 10:24:17.485196    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:24:17.488894    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:24:17.516552    5404 logs.go:282] 0 containers: []
	W1213 10:24:17.516620    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:24:17.516620    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:24:17.516620    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:24:17.564326    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:24:17.564326    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:24:17.634952    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:24:17.634952    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:24:17.674188    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:24:17.674188    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:24:17.756751    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:24:17.747854    6506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:17.748698    6506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:17.751422    6506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:17.752394    6506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:17.753732    6506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:24:17.756751    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:24:17.756751    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:24:20.291242    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:24:20.321685    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:24:20.361537    5404 logs.go:282] 0 containers: []
	W1213 10:24:20.361537    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:24:20.364858    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:24:20.398536    5404 logs.go:282] 0 containers: []
	W1213 10:24:20.398536    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:24:20.405679    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:24:20.435850    5404 logs.go:282] 0 containers: []
	W1213 10:24:20.435876    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:24:20.439241    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:24:20.468991    5404 logs.go:282] 0 containers: []
	W1213 10:24:20.469079    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:24:20.472669    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:24:20.502197    5404 logs.go:282] 0 containers: []
	W1213 10:24:20.502197    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:24:20.506440    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:24:20.540032    5404 logs.go:282] 0 containers: []
	W1213 10:24:20.540084    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:24:20.543742    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:24:20.574779    5404 logs.go:282] 0 containers: []
	W1213 10:24:20.574779    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:24:20.580429    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:24:20.610779    5404 logs.go:282] 0 containers: []
	W1213 10:24:20.610779    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:24:20.610779    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:24:20.610779    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:24:20.685419    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:24:20.685419    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:24:20.723575    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:24:20.723575    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:24:20.804584    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:24:20.796165    6657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:20.797235    6657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:20.798064    6657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:20.800032    6657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:20.800832    6657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:24:20.804584    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:24:20.804584    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:24:20.833179    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:24:20.833179    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:24:23.392414    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:24:23.419089    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:24:23.451705    5404 logs.go:282] 0 containers: []
	W1213 10:24:23.451705    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:24:23.455821    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:24:23.485523    5404 logs.go:282] 0 containers: []
	W1213 10:24:23.485523    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:24:23.489921    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:24:23.515329    5404 logs.go:282] 0 containers: []
	W1213 10:24:23.515329    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:24:23.518987    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:24:23.548528    5404 logs.go:282] 0 containers: []
	W1213 10:24:23.548528    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:24:23.551955    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:24:23.580273    5404 logs.go:282] 0 containers: []
	W1213 10:24:23.580273    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:24:23.584319    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:24:23.614425    5404 logs.go:282] 0 containers: []
	W1213 10:24:23.614425    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:24:23.619414    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:24:23.652731    5404 logs.go:282] 0 containers: []
	W1213 10:24:23.652731    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:24:23.658799    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:24:23.706105    5404 logs.go:282] 0 containers: []
	W1213 10:24:23.706105    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:24:23.706105    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:24:23.706105    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:24:23.772851    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:24:23.772851    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:24:23.812573    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:24:23.812573    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:24:23.895224    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:24:23.886378    6826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:23.888887    6826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:23.889833    6826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:23.891238    6826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:23.892113    6826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:24:23.895224    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:24:23.895224    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:24:23.923051    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:24:23.923051    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:24:26.491613    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:24:26.514943    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:24:26.545883    5404 logs.go:282] 0 containers: []
	W1213 10:24:26.545940    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:24:26.549475    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:24:26.587087    5404 logs.go:282] 0 containers: []
	W1213 10:24:26.587087    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:24:26.591222    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:24:26.629041    5404 logs.go:282] 0 containers: []
	W1213 10:24:26.629103    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:24:26.635387    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:24:26.685965    5404 logs.go:282] 0 containers: []
	W1213 10:24:26.685965    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:24:26.689966    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:24:26.721968    5404 logs.go:282] 0 containers: []
	W1213 10:24:26.721968    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:24:26.725960    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:24:26.766965    5404 logs.go:282] 0 containers: []
	W1213 10:24:26.767502    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:24:26.771247    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:24:26.807900    5404 logs.go:282] 0 containers: []
	W1213 10:24:26.807900    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:24:26.810900    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:24:26.841904    5404 logs.go:282] 0 containers: []
	W1213 10:24:26.841904    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:24:26.841904    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:24:26.841904    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:24:26.879889    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:24:26.879889    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:24:26.966121    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:24:26.955319    6999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:26.957633    6999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:26.959010    6999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:26.960012    6999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:26.961451    6999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:24:26.966121    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:24:26.966121    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:24:27.005828    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:24:27.005828    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:24:27.060920    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:24:27.060920    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:24:29.626884    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:24:29.668512    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:24:29.711901    5404 logs.go:282] 0 containers: []
	W1213 10:24:29.711901    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:24:29.715896    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:24:29.753327    5404 logs.go:282] 0 containers: []
	W1213 10:24:29.753327    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:24:29.756335    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:24:29.790339    5404 logs.go:282] 0 containers: []
	W1213 10:24:29.790339    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:24:29.793336    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:24:29.828749    5404 logs.go:282] 0 containers: []
	W1213 10:24:29.829754    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:24:29.832737    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:24:29.866527    5404 logs.go:282] 0 containers: []
	W1213 10:24:29.866527    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:24:29.869513    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:24:29.904427    5404 logs.go:282] 0 containers: []
	W1213 10:24:29.904427    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:24:29.908430    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:24:29.949080    5404 logs.go:282] 0 containers: []
	W1213 10:24:29.949117    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:24:29.954050    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:24:29.991651    5404 logs.go:282] 0 containers: []
	W1213 10:24:29.991651    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:24:29.991651    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:24:29.991651    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:24:30.061124    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:24:30.061124    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:24:30.107380    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:24:30.107380    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:24:30.215966    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:24:30.208029    7170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:30.209088    7170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:30.210520    7170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:30.211497    7170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:30.212425    7170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:24:30.215966    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:24:30.215966    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:24:30.243960    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:24:30.243960    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:24:32.815866    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:24:32.836861    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:24:32.870862    5404 logs.go:282] 0 containers: []
	W1213 10:24:32.870862    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:24:32.874877    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:24:32.906865    5404 logs.go:282] 0 containers: []
	W1213 10:24:32.906865    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:24:32.909859    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:24:32.943883    5404 logs.go:282] 0 containers: []
	W1213 10:24:32.943883    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:24:32.947871    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:24:32.986866    5404 logs.go:282] 0 containers: []
	W1213 10:24:32.986866    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:24:32.991866    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:24:33.023871    5404 logs.go:282] 0 containers: []
	W1213 10:24:33.023871    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:24:33.027876    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:24:33.063861    5404 logs.go:282] 0 containers: []
	W1213 10:24:33.063861    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:24:33.066861    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:24:33.102866    5404 logs.go:282] 0 containers: []
	W1213 10:24:33.102866    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:24:33.105870    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:24:33.138873    5404 logs.go:282] 0 containers: []
	W1213 10:24:33.138873    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:24:33.138873    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:24:33.138873    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:24:33.216871    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:24:33.216871    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:24:33.254864    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:24:33.254864    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:24:33.342876    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:24:33.331671    7336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:33.332686    7336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:33.334936    7336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:33.336057    7336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:33.336920    7336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:24:33.342876    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:24:33.342876    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:24:33.372872    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:24:33.372872    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:24:35.937521    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:24:35.968515    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:24:36.015519    5404 logs.go:282] 0 containers: []
	W1213 10:24:36.015519    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:24:36.020516    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:24:36.058521    5404 logs.go:282] 0 containers: []
	W1213 10:24:36.058521    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:24:36.062527    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:24:36.100528    5404 logs.go:282] 0 containers: []
	W1213 10:24:36.100528    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:24:36.105515    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:24:36.162523    5404 logs.go:282] 0 containers: []
	W1213 10:24:36.162523    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:24:36.167528    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:24:36.216540    5404 logs.go:282] 0 containers: []
	W1213 10:24:36.216540    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:24:36.222521    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:24:36.270534    5404 logs.go:282] 0 containers: []
	W1213 10:24:36.270534    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:24:36.274516    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:24:36.317553    5404 logs.go:282] 0 containers: []
	W1213 10:24:36.317553    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:24:36.323531    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:24:36.369522    5404 logs.go:282] 0 containers: []
	W1213 10:24:36.370532    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:24:36.370532    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:24:36.370532    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:24:36.427526    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:24:36.427526    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:24:36.547529    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:24:36.534591    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:36.536503    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:36.537947    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:36.539440    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:36.540065    7510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:24:36.547529    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:24:36.547529    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:24:36.575529    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:24:36.575529    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:24:36.644529    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:24:36.644529    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:24:39.242345    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:24:39.262343    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:24:39.295955    5404 logs.go:282] 0 containers: []
	W1213 10:24:39.295955    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:24:39.302106    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:24:39.339524    5404 logs.go:282] 0 containers: []
	W1213 10:24:39.339524    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:24:39.344523    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:24:39.375521    5404 logs.go:282] 0 containers: []
	W1213 10:24:39.376526    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:24:39.380518    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:24:39.412531    5404 logs.go:282] 0 containers: []
	W1213 10:24:39.412531    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:24:39.418523    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:24:39.452520    5404 logs.go:282] 0 containers: []
	W1213 10:24:39.452520    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:24:39.455518    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:24:39.491526    5404 logs.go:282] 0 containers: []
	W1213 10:24:39.491526    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:24:39.494518    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:24:39.527526    5404 logs.go:282] 0 containers: []
	W1213 10:24:39.527526    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:24:39.530524    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:24:39.563522    5404 logs.go:282] 0 containers: []
	W1213 10:24:39.563522    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
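	The eight docker ps probes above are minikube's per-component container scan, repeated before every log-gathering pass. The same scan can be reproduced by hand inside the node; this sketch is built from the exact name filters in the log:

	    # One docker name filter per control-plane component; an empty result
	    # corresponds to the "0 containers" lines above.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
	      echo "${c}: ${ids:-<none>}"
	    done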
	I1213 10:24:39.563522    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:24:39.563522    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:24:39.627527    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:24:39.627527    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:24:39.666521    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:24:39.666521    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:24:39.772015    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:24:39.761381    7672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:39.762320    7672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:39.764721    7672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:39.765708    7672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:39.766586    7672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:24:39.761381    7672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:39.762320    7672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:39.764721    7672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:39.765708    7672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:39.766586    7672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:24:39.772015    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:24:39.772015    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:24:39.800694    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:24:39.800694    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:24:42.355775    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:24:42.382045    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:24:42.416051    5404 logs.go:282] 0 containers: []
	W1213 10:24:42.416051    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:24:42.420053    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:24:42.450051    5404 logs.go:282] 0 containers: []
	W1213 10:24:42.450051    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:24:42.453044    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:24:42.486044    5404 logs.go:282] 0 containers: []
	W1213 10:24:42.486044    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:24:42.490053    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:24:42.521048    5404 logs.go:282] 0 containers: []
	W1213 10:24:42.521048    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:24:42.525056    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:24:42.570048    5404 logs.go:282] 0 containers: []
	W1213 10:24:42.570048    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:24:42.573046    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:24:42.604077    5404 logs.go:282] 0 containers: []
	W1213 10:24:42.604077    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:24:42.607059    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:24:42.638053    5404 logs.go:282] 0 containers: []
	W1213 10:24:42.638053    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:24:42.642059    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:24:42.670050    5404 logs.go:282] 0 containers: []
	W1213 10:24:42.670050    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:24:42.670050    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:24:42.670050    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:24:42.733721    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:24:42.733721    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:24:42.782564    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:24:42.782609    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:24:42.878076    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:24:42.866926    7840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:42.868073    7840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:42.869641    7840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:42.871705    7840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:42.873251    7840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:24:42.866926    7840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:42.868073    7840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:42.869641    7840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:42.871705    7840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:42.873251    7840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
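	The repeated memcache.go:265 errors come from kubectl's API discovery step: before describe nodes can run, the client fetches the server's API group list from /api, and it is that request being refused. The failing call can be issued directly with the same URL from the stderr above (-k skips certificate verification, which is fine for a pure reachability check):

	    # Reproduce the discovery request kubectl makes before any verb runs:
	    curl -k "https://localhost:8443/api?timeout=32s"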
	I1213 10:24:42.878076    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:24:42.878076    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:24:42.908562    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:24:42.908562    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:24:45.467459    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:24:45.486461    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:24:45.519466    5404 logs.go:282] 0 containers: []
	W1213 10:24:45.519466    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:24:45.523460    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:24:45.552467    5404 logs.go:282] 0 containers: []
	W1213 10:24:45.552467    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:24:45.555465    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:24:45.585473    5404 logs.go:282] 0 containers: []
	W1213 10:24:45.585473    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:24:45.588483    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:24:45.616462    5404 logs.go:282] 0 containers: []
	W1213 10:24:45.616462    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:24:45.619467    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:24:45.647731    5404 logs.go:282] 0 containers: []
	W1213 10:24:45.647731    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:24:45.652769    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:24:45.684250    5404 logs.go:282] 0 containers: []
	W1213 10:24:45.684250    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:24:45.687964    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:24:45.722147    5404 logs.go:282] 0 containers: []
	W1213 10:24:45.722204    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:24:45.728361    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:24:45.759216    5404 logs.go:282] 0 containers: []
	W1213 10:24:45.759216    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:24:45.759216    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:24:45.759216    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:24:45.821415    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:24:45.821415    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:24:45.891846    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:24:45.891930    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:24:45.934467    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:24:45.934467    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:24:46.043176    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:24:46.033683    8020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:46.034725    8020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:46.035958    8020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:46.037076    8020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:46.038176    8020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:24:46.033683    8020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:46.034725    8020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:46.035958    8020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:46.037076    8020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:46.038176    8020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:24:46.043709    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:24:46.043709    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:24:48.573647    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:24:48.702983    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:24:48.806720    5404 logs.go:282] 0 containers: []
	W1213 10:24:48.806720    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:24:48.810715    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:24:48.850081    5404 logs.go:282] 0 containers: []
	W1213 10:24:48.850081    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:24:48.855078    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:24:48.894900    5404 logs.go:282] 0 containers: []
	W1213 10:24:48.894900    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:24:48.899901    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:24:48.937922    5404 logs.go:282] 0 containers: []
	W1213 10:24:48.937922    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:24:48.940902    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:24:48.978900    5404 logs.go:282] 0 containers: []
	W1213 10:24:48.978900    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:24:48.982902    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:24:49.016665    5404 logs.go:282] 0 containers: []
	W1213 10:24:49.016718    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:24:49.022974    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:24:49.061492    5404 logs.go:282] 0 containers: []
	W1213 10:24:49.061597    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:24:49.066138    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:24:49.106455    5404 logs.go:282] 0 containers: []
	W1213 10:24:49.106455    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:24:49.106455    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:24:49.106455    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:24:49.176454    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:24:49.176454    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:24:49.216459    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:24:49.217452    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:24:49.317059    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:24:49.305422    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:49.306370    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:49.309251    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:49.310213    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:49.312981    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:24:49.305422    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:49.306370    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:49.309251    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:49.310213    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:49.312981    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:24:49.317059    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:24:49.317059    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:24:49.345475    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:24:49.345475    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
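	Each pass gathers the same four log sources. The bundle can be collected by hand inside the node with the commands copied from the runs above:

	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u docker -u cri-docker -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a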
	I1213 10:24:51.905137    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:24:51.943257    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:24:51.995170    5404 logs.go:282] 0 containers: []
	W1213 10:24:51.995170    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:24:51.999169    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:24:52.033171    5404 logs.go:282] 0 containers: []
	W1213 10:24:52.033171    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:24:52.036179    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:24:52.070177    5404 logs.go:282] 0 containers: []
	W1213 10:24:52.070177    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:24:52.073171    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:24:52.104170    5404 logs.go:282] 0 containers: []
	W1213 10:24:52.104170    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:24:52.109183    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:24:52.145186    5404 logs.go:282] 0 containers: []
	W1213 10:24:52.145186    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:24:52.149178    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:24:52.192172    5404 logs.go:282] 0 containers: []
	W1213 10:24:52.192172    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:24:52.195171    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:24:52.242251    5404 logs.go:282] 0 containers: []
	W1213 10:24:52.242309    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:24:52.249508    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:24:52.293829    5404 logs.go:282] 0 containers: []
	W1213 10:24:52.293829    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:24:52.293829    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:24:52.293829    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:24:52.389816    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:24:52.380331    8344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:52.381645    8344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:52.382887    8344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:52.384127    8344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:52.385049    8344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:24:52.380331    8344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:52.381645    8344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:52.382887    8344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:52.384127    8344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:52.385049    8344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:24:52.389816    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:24:52.389816    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:24:52.418826    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:24:52.418826    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:24:52.466854    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:24:52.466854    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:24:52.533814    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:24:52.533814    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:24:55.076015    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:24:55.210668    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:24:55.241674    5404 logs.go:282] 0 containers: []
	W1213 10:24:55.241674    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:24:55.244668    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:24:55.276674    5404 logs.go:282] 0 containers: []
	W1213 10:24:55.276674    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:24:55.279666    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:24:55.314674    5404 logs.go:282] 0 containers: []
	W1213 10:24:55.314674    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:24:55.318675    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:24:55.349670    5404 logs.go:282] 0 containers: []
	W1213 10:24:55.349670    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:24:55.353692    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:24:55.387673    5404 logs.go:282] 0 containers: []
	W1213 10:24:55.387673    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:24:55.392670    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:24:55.437676    5404 logs.go:282] 0 containers: []
	W1213 10:24:55.437676    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:24:55.440683    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:24:55.479672    5404 logs.go:282] 0 containers: []
	W1213 10:24:55.479672    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:24:55.482676    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:24:55.521672    5404 logs.go:282] 0 containers: []
	W1213 10:24:55.521672    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:24:55.521672    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:24:55.521672    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:24:55.558672    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:24:55.558672    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:24:55.655888    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:24:55.646076    8516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:55.647258    8516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:55.648366    8516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:55.649293    8516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:55.651907    8516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:24:55.646076    8516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:55.647258    8516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:55.648366    8516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:55.649293    8516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:55.651907    8516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:24:55.655888    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:24:55.655888    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:24:55.685118    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:24:55.685645    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:24:55.744752    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:24:55.744798    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:24:58.312600    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:24:58.349556    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:24:58.392402    5404 logs.go:282] 0 containers: []
	W1213 10:24:58.392402    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:24:58.397388    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:24:58.438394    5404 logs.go:282] 0 containers: []
	W1213 10:24:58.438394    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:24:58.441399    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:24:58.470389    5404 logs.go:282] 0 containers: []
	W1213 10:24:58.470389    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:24:58.473387    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:24:58.504400    5404 logs.go:282] 0 containers: []
	W1213 10:24:58.504400    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:24:58.507407    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:24:58.541395    5404 logs.go:282] 0 containers: []
	W1213 10:24:58.541395    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:24:58.545394    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:24:58.575394    5404 logs.go:282] 0 containers: []
	W1213 10:24:58.575394    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:24:58.578391    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:24:58.609398    5404 logs.go:282] 0 containers: []
	W1213 10:24:58.609398    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:24:58.612389    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:24:58.642389    5404 logs.go:282] 0 containers: []
	W1213 10:24:58.642389    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:24:58.642389    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:24:58.642389    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:24:58.678396    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:24:58.678396    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:24:58.767395    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:24:58.758375    8680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:58.759329    8680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:58.761429    8680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:58.762279    8680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:58.764802    8680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:24:58.758375    8680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:58.759329    8680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:58.761429    8680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:58.762279    8680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:24:58.764802    8680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
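	Another quick sanity check is to confirm which endpoint the in-node kubeconfig targets, since every failure above points at localhost:8443. This is a sketch assuming the standard kubeconfig layout, where the endpoint sits under a server: key:

	    # Show the apiserver endpoint the kubeconfig used above points at:
	    sudo grep -m1 'server:' /var/lib/minikube/kubeconfig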
	I1213 10:24:58.767395    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:24:58.767395    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:24:58.798394    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:24:58.798459    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:24:58.847506    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:24:58.847506    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:01.420338    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:01.443601    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:01.473306    5404 logs.go:282] 0 containers: []
	W1213 10:25:01.473306    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:01.476311    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:01.509311    5404 logs.go:282] 0 containers: []
	W1213 10:25:01.509311    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:01.512311    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:01.542332    5404 logs.go:282] 0 containers: []
	W1213 10:25:01.542332    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:01.546304    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:01.577321    5404 logs.go:282] 0 containers: []
	W1213 10:25:01.577321    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:01.580316    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:01.612309    5404 logs.go:282] 0 containers: []
	W1213 10:25:01.612309    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:01.616313    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:01.647338    5404 logs.go:282] 0 containers: []
	W1213 10:25:01.647338    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:01.651314    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:01.682333    5404 logs.go:282] 0 containers: []
	W1213 10:25:01.682333    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:01.685319    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:01.720317    5404 logs.go:282] 0 containers: []
	W1213 10:25:01.720317    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:01.720317    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:01.720317    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:01.758902    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:01.758902    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:01.845064    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:01.830926    8848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:01.832174    8848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:01.835494    8848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:01.836962    8848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:01.838281    8848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:25:01.830926    8848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:01.832174    8848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:01.835494    8848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:01.836962    8848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:01.838281    8848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:25:01.845064    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:01.845064    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:25:01.888951    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:01.888951    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:25:01.951961    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:01.951961    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:04.532368    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:04.559167    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:04.594675    5404 logs.go:282] 0 containers: []
	W1213 10:25:04.594675    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:04.597670    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:04.626671    5404 logs.go:282] 0 containers: []
	W1213 10:25:04.626671    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:04.629667    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:04.660678    5404 logs.go:282] 0 containers: []
	W1213 10:25:04.660678    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:04.663677    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:04.696034    5404 logs.go:282] 0 containers: []
	W1213 10:25:04.696034    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:04.699032    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:04.729457    5404 logs.go:282] 0 containers: []
	W1213 10:25:04.729512    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:04.733824    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:04.764529    5404 logs.go:282] 0 containers: []
	W1213 10:25:04.764608    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:04.768510    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:04.805921    5404 logs.go:282] 0 containers: []
	W1213 10:25:04.805991    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:04.810400    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:04.846673    5404 logs.go:282] 0 containers: []
	W1213 10:25:04.846673    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:04.846673    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:04.846673    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:25:04.874683    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:04.874683    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:25:04.930772    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:04.930772    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:04.993837    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:04.993837    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:05.037570    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:05.037570    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:05.133028    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:05.120675    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:05.121741    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:05.123009    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:05.125843    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:05.127069    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:25:05.120675    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:05.121741    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:05.123009    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:05.125843    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:05.127069    9050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
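	The timestamps show this whole cycle repeating on a roughly three-second cadence: check for an apiserver process, rescan containers, regather logs. Schematically, the wait has the shape below; this is only the pattern the log implies, not minikube's actual code:

	    # Keep polling for an apiserver process, as the pgrep lines above do.
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 3   # a new pass appears roughly every 3 seconds in the log
	    done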
	I1213 10:25:07.637145    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:07.667928    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:07.703467    5404 logs.go:282] 0 containers: []
	W1213 10:25:07.703467    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:07.707214    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:07.748207    5404 logs.go:282] 0 containers: []
	W1213 10:25:07.748207    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:07.757287    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:07.793342    5404 logs.go:282] 0 containers: []
	W1213 10:25:07.793342    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:07.797396    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:07.836400    5404 logs.go:282] 0 containers: []
	W1213 10:25:07.836400    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:07.840445    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:07.885711    5404 logs.go:282] 0 containers: []
	W1213 10:25:07.885711    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:07.889960    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:07.927755    5404 logs.go:282] 0 containers: []
	W1213 10:25:07.927755    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:07.931759    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:07.987612    5404 logs.go:282] 0 containers: []
	W1213 10:25:07.987612    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:07.990605    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:08.023076    5404 logs.go:282] 0 containers: []
	W1213 10:25:08.023076    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:08.023076    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:08.023076    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:08.090443    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:08.090443    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:08.130350    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:08.130350    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:08.220317    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:08.207417    9203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:08.208317    9203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:08.211903    9203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:08.214163    9203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:08.215274    9203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:25:08.207417    9203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:08.208317    9203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:08.211903    9203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:08.214163    9203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:08.215274    9203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:25:08.220317    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:08.220317    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:25:08.248257    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:08.248257    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
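Each pass above is minikube's control-plane diagnostic sweep: look for a running kube-apiserver process, check Docker for each expected control-plane container, then gather kubelet, dmesg, describe-nodes, Docker, and container-status logs. As a minimal sketch (not the test harness's actual code), the same checks could be reproduced by hand, assuming a working `minikube ssh`, the Docker runtime, and kubeadm-style container names:

    # Hypothetical manual version of the sweep logged above; assumes
    # containers are named k8s_<component>_... as in the filters above.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
      echo "== $c =="
      minikube ssh -- "docker ps -a --filter=name=k8s_${c} --format='{{.ID}} {{.Status}}'"
    done
    # No apiserver process ever appears in this run, which is why every
    # kubectl call fails with "connection refused" on localhost:8443.
    minikube ssh -- "sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no apiserver process'"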
	I1213 10:25:10.803286    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:10.823279    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:10.863295    5404 logs.go:282] 0 containers: []
	W1213 10:25:10.863295    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:10.868286    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:10.907327    5404 logs.go:282] 0 containers: []
	W1213 10:25:10.907327    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:10.911290    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:10.951298    5404 logs.go:282] 0 containers: []
	W1213 10:25:10.951298    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:10.954292    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:10.987286    5404 logs.go:282] 0 containers: []
	W1213 10:25:10.987286    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:10.991298    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:11.023710    5404 logs.go:282] 0 containers: []
	W1213 10:25:11.023710    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:11.027573    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:11.058568    5404 logs.go:282] 0 containers: []
	W1213 10:25:11.058568    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:11.062575    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:11.094907    5404 logs.go:282] 0 containers: []
	W1213 10:25:11.094907    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:11.099534    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:11.130054    5404 logs.go:282] 0 containers: []
	W1213 10:25:11.130089    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:11.130120    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:11.130120    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:11.206942    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:11.206942    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:11.253942    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:11.254944    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:11.354948    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:11.343389    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:11.345409    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:11.346563    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:11.347813    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:11.349209    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:25:11.343389    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:11.345409    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:11.346563    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:11.347813    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:11.349209    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:25:11.354948    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:11.354948    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:25:11.388159    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:11.388215    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:25:13.950951    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:13.972945    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:14.006949    5404 logs.go:282] 0 containers: []
	W1213 10:25:14.006949    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:14.010944    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:14.042942    5404 logs.go:282] 0 containers: []
	W1213 10:25:14.042942    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:14.045935    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:14.077941    5404 logs.go:282] 0 containers: []
	W1213 10:25:14.077941    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:14.080941    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:14.113953    5404 logs.go:282] 0 containers: []
	W1213 10:25:14.113953    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:14.116948    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:14.150588    5404 logs.go:282] 0 containers: []
	W1213 10:25:14.150588    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:14.153589    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:14.182323    5404 logs.go:282] 0 containers: []
	W1213 10:25:14.182323    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:14.187783    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:14.223919    5404 logs.go:282] 0 containers: []
	W1213 10:25:14.223919    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:14.228069    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:14.265179    5404 logs.go:282] 0 containers: []
	W1213 10:25:14.265179    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:14.265179    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:14.265179    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:14.328780    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:14.328780    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:14.365765    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:14.365765    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:14.451776    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:14.442164    9531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:14.443490    9531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:14.444370    9531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:14.446581    9531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:14.447477    9531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:25:14.442164    9531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:14.443490    9531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:14.444370    9531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:14.446581    9531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:14.447477    9531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:25:14.451776    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:14.451776    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:25:14.478778    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:14.478778    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:25:17.039942    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:17.060609    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:17.093473    5404 logs.go:282] 0 containers: []
	W1213 10:25:17.093473    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:17.097133    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:17.135468    5404 logs.go:282] 0 containers: []
	W1213 10:25:17.135468    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:17.139916    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:17.172888    5404 logs.go:282] 0 containers: []
	W1213 10:25:17.172888    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:17.177281    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:17.206954    5404 logs.go:282] 0 containers: []
	W1213 10:25:17.206954    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:17.210967    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:17.241951    5404 logs.go:282] 0 containers: []
	W1213 10:25:17.241951    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:17.245979    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:17.276951    5404 logs.go:282] 0 containers: []
	W1213 10:25:17.276951    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:17.279959    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:17.314962    5404 logs.go:282] 0 containers: []
	W1213 10:25:17.314962    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:17.317952    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:17.350959    5404 logs.go:282] 0 containers: []
	W1213 10:25:17.350959    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:17.350959    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:17.350959    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:17.387958    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:17.387958    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:17.493208    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:17.481903    9694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:17.483340    9694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:17.484680    9694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:17.486033    9694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:17.487328    9694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:25:17.481903    9694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:17.483340    9694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:17.484680    9694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:17.486033    9694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:17.487328    9694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:25:17.493208    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:17.493208    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:25:17.522836    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:17.522836    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:25:17.579010    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:17.579010    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:20.164455    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:20.194632    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:20.226036    5404 logs.go:282] 0 containers: []
	W1213 10:25:20.226036    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:20.229697    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:20.259579    5404 logs.go:282] 0 containers: []
	W1213 10:25:20.259579    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:20.264191    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:20.300976    5404 logs.go:282] 0 containers: []
	W1213 10:25:20.300976    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:20.303972    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:20.336328    5404 logs.go:282] 0 containers: []
	W1213 10:25:20.336328    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:20.341858    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:20.374691    5404 logs.go:282] 0 containers: []
	W1213 10:25:20.374691    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:20.378597    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:20.408906    5404 logs.go:282] 0 containers: []
	W1213 10:25:20.408906    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:20.412886    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:20.442980    5404 logs.go:282] 0 containers: []
	W1213 10:25:20.442980    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:20.447184    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:20.479994    5404 logs.go:282] 0 containers: []
	W1213 10:25:20.479994    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:20.479994    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:20.479994    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:20.541228    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:20.541228    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:20.580152    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:20.580152    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:20.683619    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:20.662423    9865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:20.663786    9865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:20.675470    9865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:20.677812    9865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:20.679255    9865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:25:20.662423    9865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:20.663786    9865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:20.675470    9865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:20.677812    9865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:20.679255    9865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:25:20.683619    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:20.683619    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:25:20.711426    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:20.711583    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:25:23.263754    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:23.302408    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:23.337266    5404 logs.go:282] 0 containers: []
	W1213 10:25:23.337266    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:23.340260    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:23.370276    5404 logs.go:282] 0 containers: []
	W1213 10:25:23.370276    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:23.375960    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:23.408917    5404 logs.go:282] 0 containers: []
	W1213 10:25:23.408917    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:23.412904    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:23.441906    5404 logs.go:282] 0 containers: []
	W1213 10:25:23.441906    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:23.445905    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:23.475913    5404 logs.go:282] 0 containers: []
	W1213 10:25:23.475913    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:23.478907    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:23.537839    5404 logs.go:282] 0 containers: []
	W1213 10:25:23.537839    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:23.543840    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:23.576844    5404 logs.go:282] 0 containers: []
	W1213 10:25:23.576844    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:23.580844    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:23.614844    5404 logs.go:282] 0 containers: []
	W1213 10:25:23.614844    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:23.614844    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:23.614844    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:23.687842    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:23.687842    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:23.729840    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:23.729840    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:23.842844    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:23.828686   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:23.830655   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:23.832409   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:23.835392   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:23.836394   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:25:23.828686   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:23.830655   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:23.832409   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:23.835392   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:23.836394   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:25:23.842844    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:23.842844    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:25:23.881854    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:23.881854    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:25:26.445770    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:26.468742    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:26.497606    5404 logs.go:282] 0 containers: []
	W1213 10:25:26.497606    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:26.504336    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:26.535650    5404 logs.go:282] 0 containers: []
	W1213 10:25:26.535650    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:26.539148    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:26.574002    5404 logs.go:282] 0 containers: []
	W1213 10:25:26.574002    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:26.577576    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:26.608168    5404 logs.go:282] 0 containers: []
	W1213 10:25:26.608168    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:26.612250    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:26.648664    5404 logs.go:282] 0 containers: []
	W1213 10:25:26.648664    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:26.652642    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:26.695581    5404 logs.go:282] 0 containers: []
	W1213 10:25:26.695581    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:26.701128    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:26.736805    5404 logs.go:282] 0 containers: []
	W1213 10:25:26.736805    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:26.741531    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:26.775202    5404 logs.go:282] 0 containers: []
	W1213 10:25:26.775202    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:26.775202    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:26.775202    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:25:26.837152    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:26.837152    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:26.907293    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:26.907293    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:26.944829    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:26.944829    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:27.035510    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:27.024018   10233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:27.025149   10233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:27.025873   10233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:27.028475   10233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:27.029437   10233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:25:27.024018   10233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:27.025149   10233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:27.025873   10233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:27.028475   10233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:27.029437   10233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:25:27.035510    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:27.035510    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:25:29.569253    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:29.595146    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:29.629968    5404 logs.go:282] 0 containers: []
	W1213 10:25:29.629968    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:29.639031    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:29.680336    5404 logs.go:282] 0 containers: []
	W1213 10:25:29.680336    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:29.683882    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:29.711575    5404 logs.go:282] 0 containers: []
	W1213 10:25:29.711575    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:29.715562    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:29.748700    5404 logs.go:282] 0 containers: []
	W1213 10:25:29.748700    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:29.754102    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:29.787482    5404 logs.go:282] 0 containers: []
	W1213 10:25:29.787482    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:29.791566    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:29.820415    5404 logs.go:282] 0 containers: []
	W1213 10:25:29.820415    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:29.824718    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:29.855879    5404 logs.go:282] 0 containers: []
	W1213 10:25:29.855879    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:29.861271    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:29.894074    5404 logs.go:282] 0 containers: []
	W1213 10:25:29.894074    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:29.894074    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:29.894074    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:29.959671    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:29.959671    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:30.002475    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:30.002475    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:30.082532    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:30.074335   10383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:30.075482   10383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:30.076694   10383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:30.078180   10383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:30.079305   10383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:25:30.074335   10383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:30.075482   10383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:30.076694   10383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:30.078180   10383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:30.079305   10383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:25:30.082532    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:30.082532    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:25:30.110237    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:30.110297    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:25:32.671686    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:32.693105    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:32.723439    5404 logs.go:282] 0 containers: []
	W1213 10:25:32.723439    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:32.727792    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:32.756940    5404 logs.go:282] 0 containers: []
	W1213 10:25:32.756940    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:32.761232    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:32.791456    5404 logs.go:282] 0 containers: []
	W1213 10:25:32.791456    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:32.800403    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:32.831611    5404 logs.go:282] 0 containers: []
	W1213 10:25:32.831687    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:32.835616    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:32.865546    5404 logs.go:282] 0 containers: []
	W1213 10:25:32.865546    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:32.869732    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:32.902223    5404 logs.go:282] 0 containers: []
	W1213 10:25:32.902223    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:32.906561    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:32.940346    5404 logs.go:282] 0 containers: []
	W1213 10:25:32.940346    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:32.944320    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:32.975469    5404 logs.go:282] 0 containers: []
	W1213 10:25:32.975499    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:32.975499    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:32.975499    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:33.041207    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:33.041207    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:33.083590    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:33.083590    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:33.180935    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:33.168485   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:33.169714   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:33.171003   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:33.172674   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:33.174027   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:25:33.168485   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:33.169714   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:33.171003   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:33.172674   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:33.174027   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:25:33.180935    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:33.180935    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:25:33.210089    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:33.210152    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:25:35.768098    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:35.860177    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:35.889000    5404 logs.go:282] 0 containers: []
	W1213 10:25:35.889000    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:35.896003    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:35.925040    5404 logs.go:282] 0 containers: []
	W1213 10:25:35.925040    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:35.930032    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:35.958609    5404 logs.go:282] 0 containers: []
	W1213 10:25:35.958609    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:35.962133    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:35.992299    5404 logs.go:282] 0 containers: []
	W1213 10:25:35.992362    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:35.996377    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:36.026302    5404 logs.go:282] 0 containers: []
	W1213 10:25:36.026302    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:36.029926    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:36.059528    5404 logs.go:282] 0 containers: []
	W1213 10:25:36.059528    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:36.063110    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:36.093396    5404 logs.go:282] 0 containers: []
	W1213 10:25:36.093396    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:36.097255    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:36.126154    5404 logs.go:282] 0 containers: []
	W1213 10:25:36.126154    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:36.126154    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:36.126154    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:36.163586    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:36.164570    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:36.247461    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:36.234933   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:36.237658   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:36.238961   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:36.241820   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:36.243104   10712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:25:36.247461    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:36.247461    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:25:36.274462    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:36.274462    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:25:36.322858    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:36.322858    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
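Every "describe nodes" attempt in these cycles fails the same way: kubectl cannot reach the apiserver on localhost:8443 ("connection refused"), which is consistent with the empty docker ps results for k8s_kube-apiserver just above. A minimal Go sketch (not minikube code) of reproducing that probe from inside the node, assuming the apiserver's standard /livez endpoint and skipping TLS verification only to keep the example self-contained:

    // probe_apiserver.go - checks whether a kube-apiserver answers on
    // localhost:8443, mirroring the "connection refused" failures above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The real apiserver serves TLS signed by the cluster CA;
            // verification is skipped here purely for brevity.
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://localhost:8443/livez")
        if err != nil {
            // Matches the log: "dial tcp [::1]:8443: connect: connection refused"
            fmt.Println("apiserver unreachable:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("apiserver status:", resp.Status)
    }

When nothing is listening on 8443, the Get call fails at dial time with exactly the "connect: connection refused" error recorded in the stderr blocks.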
	I1213 10:25:38.892110    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:38.918061    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:38.952306    5404 logs.go:282] 0 containers: []
	W1213 10:25:38.952306    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:38.956376    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:38.984034    5404 logs.go:282] 0 containers: []
	W1213 10:25:38.984034    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:38.988175    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:39.018071    5404 logs.go:282] 0 containers: []
	W1213 10:25:39.018071    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:39.022189    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:39.056215    5404 logs.go:282] 0 containers: []
	W1213 10:25:39.056285    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:39.060000    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:39.089755    5404 logs.go:282] 0 containers: []
	W1213 10:25:39.089755    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:39.093043    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:39.127383    5404 logs.go:282] 0 containers: []
	W1213 10:25:39.127457    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:39.130982    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:39.159645    5404 logs.go:282] 0 containers: []
	W1213 10:25:39.159645    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:39.163350    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:39.192096    5404 logs.go:282] 0 containers: []
	W1213 10:25:39.192179    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:39.192179    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:39.192179    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:25:39.223185    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:39.223313    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:25:39.274723    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:39.274723    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:39.340519    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:39.340519    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:39.383564    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:39.383564    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:39.468710    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:39.457661   10903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:39.458930   10903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:39.461566   10903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:39.463030   10903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:39.464232   10903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:25:41.972444    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:42.004328    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:42.049826    5404 logs.go:282] 0 containers: []
	W1213 10:25:42.049826    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:42.055261    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:42.086954    5404 logs.go:282] 0 containers: []
	W1213 10:25:42.086954    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:42.092499    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:42.135541    5404 logs.go:282] 0 containers: []
	W1213 10:25:42.135541    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:42.137788    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:42.175806    5404 logs.go:282] 0 containers: []
	W1213 10:25:42.175922    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:42.181557    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:42.217489    5404 logs.go:282] 0 containers: []
	W1213 10:25:42.217489    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:42.222304    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:42.252826    5404 logs.go:282] 0 containers: []
	W1213 10:25:42.252826    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:42.257410    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:42.294098    5404 logs.go:282] 0 containers: []
	W1213 10:25:42.294098    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:42.298088    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:42.329092    5404 logs.go:282] 0 containers: []
	W1213 10:25:42.329092    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:42.329092    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:42.329092    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:25:42.390575    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:42.390575    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:42.474343    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:42.474343    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:42.524422    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:42.524422    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:42.618600    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:42.608954   11067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:42.610024   11067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:42.610906   11067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:42.613189   11067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:42.614616   11067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:25:42.618600    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:42.618600    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
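The repeating block is a per-component container lookup: one docker ps -a --filter=name=k8s_<component> --format={{.ID}} call per expected control-plane piece, and an empty result produces the paired "0 containers" / "No container was found matching" lines. A sketch of the same enumeration, assuming only that the docker CLI is on PATH:

    // list_k8s_containers.go - per-component lookup matching the
    // "0 containers" lines in the log; docker CLI assumed on PATH.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "kubernetes-dashboard",
        }
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Printf("%s: lookup failed: %v\n", c, err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }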
	I1213 10:25:45.152316    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:45.174306    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:45.207310    5404 logs.go:282] 0 containers: []
	W1213 10:25:45.207310    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:45.210309    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:45.238013    5404 logs.go:282] 0 containers: []
	W1213 10:25:45.238013    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:45.241522    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:45.277528    5404 logs.go:282] 0 containers: []
	W1213 10:25:45.277528    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:45.281057    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:45.310750    5404 logs.go:282] 0 containers: []
	W1213 10:25:45.310750    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:45.314483    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:45.352031    5404 logs.go:282] 0 containers: []
	W1213 10:25:45.352031    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:45.355035    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:45.386619    5404 logs.go:282] 0 containers: []
	W1213 10:25:45.386619    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:45.390619    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:45.424279    5404 logs.go:282] 0 containers: []
	W1213 10:25:45.424279    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:45.428270    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:45.458271    5404 logs.go:282] 0 containers: []
	W1213 10:25:45.458271    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:45.458271    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:45.458271    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:45.522619    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:45.522619    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:45.562726    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:45.562726    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:45.647172    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:45.636542   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:45.637634   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:45.638735   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:45.640327   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:45.642687   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:25:45.647172    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:45.647172    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:25:45.685304    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:45.685304    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:25:48.245250    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:48.264253    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:48.303617    5404 logs.go:282] 0 containers: []
	W1213 10:25:48.303617    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:48.307333    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:48.340820    5404 logs.go:282] 0 containers: []
	W1213 10:25:48.340820    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:48.344802    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:48.381814    5404 logs.go:282] 0 containers: []
	W1213 10:25:48.381814    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:48.385808    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:48.425330    5404 logs.go:282] 0 containers: []
	W1213 10:25:48.425330    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:48.429331    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:48.462632    5404 logs.go:282] 0 containers: []
	W1213 10:25:48.462632    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:48.467247    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:48.505972    5404 logs.go:282] 0 containers: []
	W1213 10:25:48.505972    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:48.510971    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:48.538965    5404 logs.go:282] 0 containers: []
	W1213 10:25:48.538965    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:48.542968    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:48.571976    5404 logs.go:282] 0 containers: []
	W1213 10:25:48.571976    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:48.571976    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:48.571976    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:48.639975    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:48.639975    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:48.675969    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:48.675969    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:48.764445    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:48.753486   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:48.754879   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:48.756951   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:48.758329   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:48.759587   11379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:25:48.764445    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:48.764445    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:25:48.795028    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:48.796033    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:25:51.360604    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:51.386834    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:51.420844    5404 logs.go:282] 0 containers: []
	W1213 10:25:51.420844    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:51.423830    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:51.454840    5404 logs.go:282] 0 containers: []
	W1213 10:25:51.454840    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:51.457831    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:51.487200    5404 logs.go:282] 0 containers: []
	W1213 10:25:51.487200    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:51.491050    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:51.523378    5404 logs.go:282] 0 containers: []
	W1213 10:25:51.523378    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:51.527449    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:51.560764    5404 logs.go:282] 0 containers: []
	W1213 10:25:51.560764    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:51.563766    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:51.592770    5404 logs.go:282] 0 containers: []
	W1213 10:25:51.592770    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:51.595756    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:51.624758    5404 logs.go:282] 0 containers: []
	W1213 10:25:51.624758    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:51.627755    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:51.660771    5404 logs.go:282] 0 containers: []
	W1213 10:25:51.660771    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:51.660771    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:51.660771    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:51.723762    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:51.723762    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:51.758760    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:51.758760    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:51.846775    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:51.839620   11543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:51.840595   11543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:51.841762   11543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:51.842893   11543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:51.843941   11543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:25:51.846775    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:51.846775    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:25:51.875761    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:51.875761    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:25:54.430333    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:54.454434    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:54.493825    5404 logs.go:282] 0 containers: []
	W1213 10:25:54.493825    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:54.497512    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:54.533494    5404 logs.go:282] 0 containers: []
	W1213 10:25:54.533494    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:54.538683    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:54.564773    5404 logs.go:282] 0 containers: []
	W1213 10:25:54.564773    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:54.568433    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:54.605317    5404 logs.go:282] 0 containers: []
	W1213 10:25:54.605317    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:54.609855    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:54.640802    5404 logs.go:282] 0 containers: []
	W1213 10:25:54.640802    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:54.645532    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:54.677470    5404 logs.go:282] 0 containers: []
	W1213 10:25:54.677470    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:54.683512    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:54.716223    5404 logs.go:282] 0 containers: []
	W1213 10:25:54.716223    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:54.720204    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:54.752249    5404 logs.go:282] 0 containers: []
	W1213 10:25:54.752295    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:54.752346    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:54.752346    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:54.824990    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:54.824990    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:54.861528    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:54.861528    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:54.947890    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:54.939731   11707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:54.941493   11707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:54.942789   11707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:54.944407   11707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:54.945507   11707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:25:54.947890    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:54.947890    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:25:54.978543    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:54.978543    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:25:57.539868    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:25:57.563007    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:25:57.597162    5404 logs.go:282] 0 containers: []
	W1213 10:25:57.597219    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:25:57.601488    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:25:57.633273    5404 logs.go:282] 0 containers: []
	W1213 10:25:57.633273    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:25:57.637275    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:25:57.666277    5404 logs.go:282] 0 containers: []
	W1213 10:25:57.666277    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:25:57.671269    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:25:57.706080    5404 logs.go:282] 0 containers: []
	W1213 10:25:57.706080    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:25:57.709089    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:25:57.743977    5404 logs.go:282] 0 containers: []
	W1213 10:25:57.744028    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:25:57.747842    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:25:57.776566    5404 logs.go:282] 0 containers: []
	W1213 10:25:57.776566    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:25:57.780492    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:25:57.815351    5404 logs.go:282] 0 containers: []
	W1213 10:25:57.815386    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:25:57.819106    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:25:57.854910    5404 logs.go:282] 0 containers: []
	W1213 10:25:57.854910    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:25:57.854910    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:25:57.854910    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:25:57.917747    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:25:57.917747    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:25:57.956537    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:25:57.956537    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:25:58.040821    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:25:58.031929   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:58.033046   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:58.035980   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:58.037154   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:25:58.038464   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:25:58.040821    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:25:58.040821    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:25:58.070378    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:25:58.070378    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
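The timestamps put each full cycle roughly three seconds apart (10:25:36, :38.9, :41.9, :45.1, :48.2, :51.3, :54.4, :57.5, 10:26:00.6, ...), consistent with a fixed-interval poll that keeps retrying until the kube-apiserver process appears. A sketch of such a loop, with the pgrep probe taken from the log and the deadline chosen arbitrarily for the example:

    // wait_for_apiserver.go - fixed-interval retry loop like the one
    // implied by the ~3 s cadence; the 2-minute deadline is illustrative.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // isApiserverUp mirrors the log's probe:
    //   sudo pgrep -xnf kube-apiserver.*minikube.*
    func isApiserverUp() bool {
        return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            if isApiserverUp() {
                fmt.Println("kube-apiserver process found")
                return
            }
            time.Sleep(3 * time.Second) // matches the observed spacing
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }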
	I1213 10:26:00.628331    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:26:00.655322    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:26:00.699337    5404 logs.go:282] 0 containers: []
	W1213 10:26:00.699337    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:26:00.706348    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:26:00.750326    5404 logs.go:282] 0 containers: []
	W1213 10:26:00.750326    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:26:00.755322    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:26:00.800324    5404 logs.go:282] 0 containers: []
	W1213 10:26:00.800324    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:26:00.805326    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:26:00.864335    5404 logs.go:282] 0 containers: []
	W1213 10:26:00.864335    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:26:00.870325    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:26:00.930337    5404 logs.go:282] 0 containers: []
	W1213 10:26:00.930337    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:26:00.935326    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:26:00.969332    5404 logs.go:282] 0 containers: []
	W1213 10:26:00.969332    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:26:00.973332    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:26:01.016343    5404 logs.go:282] 0 containers: []
	W1213 10:26:01.016343    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:26:01.020342    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:26:01.057324    5404 logs.go:282] 0 containers: []
	W1213 10:26:01.057324    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:26:01.057324    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:26:01.057324    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:26:01.112326    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:26:01.112326    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:26:01.180335    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:26:01.181341    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:26:01.227325    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:26:01.227325    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:26:01.329208    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:26:01.321405   12057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:01.322672   12057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:01.323441   12057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:01.325880   12057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:01.326997   12057 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:26:01.329208    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:26:01.329208    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:26:03.863205    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:26:03.883197    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:26:03.913189    5404 logs.go:282] 0 containers: []
	W1213 10:26:03.913189    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:26:03.916793    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:26:03.948649    5404 logs.go:282] 0 containers: []
	W1213 10:26:03.948649    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:26:03.953009    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:26:03.983634    5404 logs.go:282] 0 containers: []
	W1213 10:26:03.983634    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:26:03.987881    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:26:04.021702    5404 logs.go:282] 0 containers: []
	W1213 10:26:04.021702    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:26:04.024698    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:26:04.054698    5404 logs.go:282] 0 containers: []
	W1213 10:26:04.054698    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:26:04.057697    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:26:04.088159    5404 logs.go:282] 0 containers: []
	W1213 10:26:04.088159    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:26:04.092055    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:26:04.127161    5404 logs.go:282] 0 containers: []
	W1213 10:26:04.127246    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:26:04.132756    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:26:04.178380    5404 logs.go:282] 0 containers: []
	W1213 10:26:04.178488    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:26:04.178488    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:26:04.178527    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:26:04.252804    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:26:04.252804    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:26:04.291823    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:26:04.291823    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:26:04.374390    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:26:04.364826   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:04.365685   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:04.367791   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:04.369125   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:04.370088   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:26:04.374390    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:26:04.374390    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:26:04.401790    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:26:04.401790    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:26:06.948650    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:26:06.970635    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:26:07.002793    5404 logs.go:282] 0 containers: []
	W1213 10:26:07.002793    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:26:07.006410    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:26:07.041279    5404 logs.go:282] 0 containers: []
	W1213 10:26:07.041279    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:26:07.047699    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:26:07.079332    5404 logs.go:282] 0 containers: []
	W1213 10:26:07.079332    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:26:07.082587    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:26:07.119271    5404 logs.go:282] 0 containers: []
	W1213 10:26:07.119271    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:26:07.124093    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:26:07.174682    5404 logs.go:282] 0 containers: []
	W1213 10:26:07.174682    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:26:07.178686    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:26:07.216699    5404 logs.go:282] 0 containers: []
	W1213 10:26:07.216699    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:26:07.220688    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:26:07.256112    5404 logs.go:282] 0 containers: []
	W1213 10:26:07.256112    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:26:07.261153    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:26:07.293183    5404 logs.go:282] 0 containers: []
	W1213 10:26:07.293183    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:26:07.293183    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:26:07.293183    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:26:07.369599    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:26:07.369599    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:26:07.408599    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:26:07.408599    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:26:07.489441    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:26:07.480280   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:07.481382   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:07.482733   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:07.483725   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:07.485020   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:26:07.480280   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:07.481382   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:07.482733   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:07.483725   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:07.485020   12371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:26:07.489441    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:26:07.489441    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:26:07.522038    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:26:07.522038    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
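(For reference, each diagnostic pass above boils down to two checks: are any control-plane containers present, and is the apiserver answering on localhost:8443. A minimal by-hand sketch from inside the minikube node, assuming the docker-shim "k8s_<component>" container-name prefix and the default 8443 apiserver port seen in these logs; an approximation for illustration, not minikube's actual implementation:)

    # List control-plane containers; empty output matches the "0 containers" lines above.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
      echo "${c}: ${ids:-<none>}"
    done
    # Probe the apiserver directly; "connection refused" here matches the kubectl errors above.
    curl -k --max-time 5 https://localhost:8443/healthz || echo "apiserver unreachable on :8443"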
	I1213 10:26:10.084704    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:26:10.113920    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:26:10.150212    5404 logs.go:282] 0 containers: []
	W1213 10:26:10.150212    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:26:10.154778    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:26:10.189356    5404 logs.go:282] 0 containers: []
	W1213 10:26:10.189356    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:26:10.193350    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:26:10.223980    5404 logs.go:282] 0 containers: []
	W1213 10:26:10.223980    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:26:10.228854    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:26:10.263081    5404 logs.go:282] 0 containers: []
	W1213 10:26:10.263081    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:26:10.266079    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:26:10.303634    5404 logs.go:282] 0 containers: []
	W1213 10:26:10.303634    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:26:10.306649    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:26:10.340741    5404 logs.go:282] 0 containers: []
	W1213 10:26:10.340741    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:26:10.344506    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:26:10.380770    5404 logs.go:282] 0 containers: []
	W1213 10:26:10.380798    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:26:10.385668    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:26:10.418448    5404 logs.go:282] 0 containers: []
	W1213 10:26:10.418448    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:26:10.418448    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:26:10.418448    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:26:10.489397    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:26:10.489397    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:26:10.530382    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:26:10.530382    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:26:10.619795    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:26:10.610335   12529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:10.611594   12529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:10.612727   12529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:10.613769   12529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:10.614998   12529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:26:10.610335   12529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:10.611594   12529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:10.612727   12529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:10.613769   12529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:10.614998   12529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:26:10.619795    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:26:10.619795    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:26:10.648514    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:26:10.648514    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:26:13.207517    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:26:13.232602    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:26:13.265181    5404 logs.go:282] 0 containers: []
	W1213 10:26:13.265181    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:26:13.270189    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:26:13.303643    5404 logs.go:282] 0 containers: []
	W1213 10:26:13.303643    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:26:13.307165    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:26:13.336420    5404 logs.go:282] 0 containers: []
	W1213 10:26:13.336420    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:26:13.340443    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:26:13.368099    5404 logs.go:282] 0 containers: []
	W1213 10:26:13.368099    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:26:13.371704    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:26:13.404919    5404 logs.go:282] 0 containers: []
	W1213 10:26:13.404919    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:26:13.408481    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:26:13.439031    5404 logs.go:282] 0 containers: []
	W1213 10:26:13.439031    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:26:13.442783    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:26:13.475089    5404 logs.go:282] 0 containers: []
	W1213 10:26:13.475135    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:26:13.478994    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:26:13.511589    5404 logs.go:282] 0 containers: []
	W1213 10:26:13.511589    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:26:13.511589    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:26:13.511589    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:26:13.578647    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:26:13.578647    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:26:13.617130    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:26:13.617130    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:26:13.702509    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:26:13.691997   12696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:13.692885   12696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:13.695853   12696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:13.697311   12696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:13.698362   12696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:26:13.691997   12696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:13.692885   12696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:13.695853   12696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:13.697311   12696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:13.698362   12696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:26:13.702509    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:26:13.702509    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:26:13.729740    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:26:13.729824    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:26:16.288607    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:26:16.322048    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:26:16.357027    5404 logs.go:282] 0 containers: []
	W1213 10:26:16.357027    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:26:16.360975    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:26:16.397415    5404 logs.go:282] 0 containers: []
	W1213 10:26:16.397538    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:26:16.403317    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:26:16.435514    5404 logs.go:282] 0 containers: []
	W1213 10:26:16.435514    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:26:16.439108    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:26:16.469782    5404 logs.go:282] 0 containers: []
	W1213 10:26:16.469782    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:26:16.476710    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:26:16.521301    5404 logs.go:282] 0 containers: []
	W1213 10:26:16.521301    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:26:16.526288    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:26:16.553758    5404 logs.go:282] 0 containers: []
	W1213 10:26:16.553758    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:26:16.557353    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:26:16.591596    5404 logs.go:282] 0 containers: []
	W1213 10:26:16.591596    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:26:16.595609    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:26:16.634961    5404 logs.go:282] 0 containers: []
	W1213 10:26:16.634961    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:26:16.634961    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:26:16.634961    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:26:16.725483    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:26:16.715343   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:16.716485   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:16.717488   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:16.719854   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:16.721850   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:26:16.715343   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:16.716485   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:16.717488   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:16.719854   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:16.721850   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:26:16.725483    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:26:16.725483    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:26:16.755631    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:26:16.755631    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:26:16.815288    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:26:16.815288    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:26:16.885926    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:26:16.885926    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:26:19.433594    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:26:19.452596    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:26:19.484432    5404 logs.go:282] 0 containers: []
	W1213 10:26:19.484432    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:26:19.488123    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:26:19.521726    5404 logs.go:282] 0 containers: []
	W1213 10:26:19.521726    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:26:19.525725    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:26:19.558352    5404 logs.go:282] 0 containers: []
	W1213 10:26:19.558352    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:26:19.562694    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:26:19.595525    5404 logs.go:282] 0 containers: []
	W1213 10:26:19.595525    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:26:19.599804    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:26:19.633595    5404 logs.go:282] 0 containers: []
	W1213 10:26:19.633595    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:26:19.637595    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:26:19.665605    5404 logs.go:282] 0 containers: []
	W1213 10:26:19.665605    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:26:19.668594    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:26:19.699869    5404 logs.go:282] 0 containers: []
	W1213 10:26:19.700400    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:26:19.704619    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:26:19.734549    5404 logs.go:282] 0 containers: []
	W1213 10:26:19.734549    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:26:19.734549    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:26:19.734549    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:26:19.800173    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:26:19.800173    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:26:19.841046    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:26:19.841046    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:26:19.932560    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:26:19.924154   13035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:19.924826   13035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:19.927239   13035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:19.928424   13035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:19.929578   13035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:26:19.924154   13035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:19.924826   13035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:19.927239   13035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:19.928424   13035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:19.929578   13035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:26:19.932560    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:26:19.932560    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:26:19.982033    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:26:19.982033    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:26:22.543135    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:26:22.567276    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:26:22.601122    5404 logs.go:282] 0 containers: []
	W1213 10:26:22.601186    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:26:22.604870    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:26:22.641272    5404 logs.go:282] 0 containers: []
	W1213 10:26:22.641272    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:26:22.649542    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:26:22.679498    5404 logs.go:282] 0 containers: []
	W1213 10:26:22.679498    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:26:22.683789    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:26:22.718001    5404 logs.go:282] 0 containers: []
	W1213 10:26:22.718001    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:26:22.721010    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:26:22.749051    5404 logs.go:282] 0 containers: []
	W1213 10:26:22.749051    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:26:22.753040    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:26:22.785441    5404 logs.go:282] 0 containers: []
	W1213 10:26:22.785441    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:26:22.789368    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:26:22.819841    5404 logs.go:282] 0 containers: []
	W1213 10:26:22.819841    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:26:22.824297    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:26:22.858214    5404 logs.go:282] 0 containers: []
	W1213 10:26:22.858696    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:26:22.858696    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:26:22.858696    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:26:22.944433    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:26:22.937043   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:22.937851   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:22.939938   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:22.940875   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:22.941943   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:26:22.937043   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:22.937851   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:22.939938   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:22.940875   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:22.941943   13193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:26:22.944433    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:26:22.944433    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:26:22.974426    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:26:22.974476    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:26:23.028743    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:26:23.028743    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:26:23.096646    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:26:23.096646    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:26:25.640338    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:26:25.664590    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:26:25.699695    5404 logs.go:282] 0 containers: []
	W1213 10:26:25.699695    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:26:25.703651    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:26:25.732063    5404 logs.go:282] 0 containers: []
	W1213 10:26:25.732063    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:26:25.735870    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:26:25.768031    5404 logs.go:282] 0 containers: []
	W1213 10:26:25.768031    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:26:25.771330    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:26:25.806047    5404 logs.go:282] 0 containers: []
	W1213 10:26:25.806047    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:26:25.809583    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:26:25.841003    5404 logs.go:282] 0 containers: []
	W1213 10:26:25.841003    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:26:25.845216    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:26:25.876445    5404 logs.go:282] 0 containers: []
	W1213 10:26:25.876445    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:26:25.880871    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:26:25.915579    5404 logs.go:282] 0 containers: []
	W1213 10:26:25.915579    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:26:25.921186    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:26:25.951170    5404 logs.go:282] 0 containers: []
	W1213 10:26:25.951170    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:26:25.951170    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:26:25.951170    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:26:26.016230    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:26:26.016318    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:26:26.054563    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:26:26.054563    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:26:26.148918    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:26:26.137123   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:26.139108   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:26.141918   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:26.143563   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:26.144499   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:26:26.137123   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:26.139108   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:26.141918   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:26.143563   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:26.144499   13363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:26:26.148965    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:26:26.148965    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:26:26.177858    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:26:26.177858    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:26:28.736339    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:26:28.763194    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:26:28.796718    5404 logs.go:282] 0 containers: []
	W1213 10:26:28.796718    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:26:28.800982    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:26:28.836044    5404 logs.go:282] 0 containers: []
	W1213 10:26:28.836044    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:26:28.840211    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:26:28.872122    5404 logs.go:282] 0 containers: []
	W1213 10:26:28.872122    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:26:28.875127    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:26:28.906696    5404 logs.go:282] 0 containers: []
	W1213 10:26:28.906696    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:26:28.910695    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:26:28.940145    5404 logs.go:282] 0 containers: []
	W1213 10:26:28.940145    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:26:28.944977    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:26:28.976235    5404 logs.go:282] 0 containers: []
	W1213 10:26:28.976235    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:26:28.979862    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:26:29.009923    5404 logs.go:282] 0 containers: []
	W1213 10:26:29.009923    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:26:29.013924    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:26:29.045514    5404 logs.go:282] 0 containers: []
	W1213 10:26:29.045514    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:26:29.045514    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:26:29.045514    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:26:29.110318    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:26:29.110318    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:26:29.149172    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:26:29.150199    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:26:29.235487    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:26:29.226416   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:29.227337   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:29.229602   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:29.230773   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:29.232063   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:26:29.226416   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:29.227337   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:29.229602   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:29.230773   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:29.232063   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:26:29.235487    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:26:29.235487    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:26:29.265561    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:26:29.265561    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:26:31.831462    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:26:31.859360    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:26:31.894190    5404 logs.go:282] 0 containers: []
	W1213 10:26:31.894190    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:26:31.898985    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:26:31.929699    5404 logs.go:282] 0 containers: []
	W1213 10:26:31.929699    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:26:31.933698    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:26:31.963148    5404 logs.go:282] 0 containers: []
	W1213 10:26:31.963148    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:26:31.967963    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:26:32.001601    5404 logs.go:282] 0 containers: []
	W1213 10:26:32.001601    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:26:32.005677    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:26:32.038481    5404 logs.go:282] 0 containers: []
	W1213 10:26:32.038481    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:26:32.042558    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:26:32.074543    5404 logs.go:282] 0 containers: []
	W1213 10:26:32.074543    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:26:32.080775    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:26:32.110290    5404 logs.go:282] 0 containers: []
	W1213 10:26:32.110290    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:26:32.114549    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:26:32.143028    5404 logs.go:282] 0 containers: []
	W1213 10:26:32.143028    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:26:32.143028    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:26:32.143028    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:26:32.208894    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:26:32.208894    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:26:32.249787    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:26:32.249787    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:26:32.337210    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:26:32.325667   13698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:32.326852   13698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:32.327627   13698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:32.329903   13698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:32.331076   13698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:26:32.325667   13698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:32.326852   13698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:32.327627   13698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:32.329903   13698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:32.331076   13698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:26:32.337210    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:26:32.337210    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:26:32.364829    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:26:32.364829    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:26:34.921659    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:26:34.950803    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:26:34.982219    5404 logs.go:282] 0 containers: []
	W1213 10:26:34.982219    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:26:34.985789    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:26:35.012811    5404 logs.go:282] 0 containers: []
	W1213 10:26:35.012811    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:26:35.016291    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:26:35.048172    5404 logs.go:282] 0 containers: []
	W1213 10:26:35.048172    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:26:35.051903    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:26:35.080133    5404 logs.go:282] 0 containers: []
	W1213 10:26:35.080190    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:26:35.083771    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:26:35.117000    5404 logs.go:282] 0 containers: []
	W1213 10:26:35.117066    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:26:35.120018    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:26:35.149857    5404 logs.go:282] 0 containers: []
	W1213 10:26:35.149857    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:26:35.153926    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:26:35.184195    5404 logs.go:282] 0 containers: []
	W1213 10:26:35.184195    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:26:35.188376    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:26:35.217448    5404 logs.go:282] 0 containers: []
	W1213 10:26:35.217448    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:26:35.217448    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:26:35.217448    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:26:35.281922    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:26:35.281922    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:26:35.322729    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:26:35.322729    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:26:35.409012    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:26:35.397044   13864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:35.398597   13864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:35.399810   13864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:35.401182   13864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:35.401939   13864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:26:35.397044   13864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:35.398597   13864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:35.399810   13864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:35.401182   13864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:35.401939   13864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:26:35.409012    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:26:35.409012    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:26:35.436286    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:26:35.436286    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
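
The block above is one iteration of minikube's apiserver wait loop: probe for a kube-apiserver process with pgrep, check for each expected control-plane container by its cri-dockerd name prefix (k8s_<component>), and, when every probe comes back empty, gather kubelet, dmesg, describe-nodes, Docker, and container-status logs before retrying. A minimal sketch of the same per-component probe, runnable from the host against a docker-driver node (the node name "minikube" is an assumption; substitute the profile under test):

    # Hedged sketch: re-run the container probes by hand. Not minikube's code.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(docker exec minikube docker ps -a \
              --filter "name=k8s_${c}" --format '{{.ID}}')
      echo "${c}: ${ids:-no container found}"
    done

An empty result for every component, as seen here, suggests no control-plane containers were ever created, which typically points at the kubelet or its static-pod manifests rather than at any single component.
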
	I1213 10:26:37.991540    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:26:38.018244    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:26:38.050786    5404 logs.go:282] 0 containers: []
	W1213 10:26:38.050805    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:26:38.054442    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:26:38.083499    5404 logs.go:282] 0 containers: []
	W1213 10:26:38.083576    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:26:38.086991    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:26:38.114199    5404 logs.go:282] 0 containers: []
	W1213 10:26:38.114199    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:26:38.118029    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:26:38.146430    5404 logs.go:282] 0 containers: []
	W1213 10:26:38.146430    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:26:38.150385    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:26:38.181920    5404 logs.go:282] 0 containers: []
	W1213 10:26:38.181963    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:26:38.185428    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:26:38.215038    5404 logs.go:282] 0 containers: []
	W1213 10:26:38.215038    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:26:38.218488    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:26:38.248657    5404 logs.go:282] 0 containers: []
	W1213 10:26:38.248718    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:26:38.252582    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:26:38.285798    5404 logs.go:282] 0 containers: []
	W1213 10:26:38.285860    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:26:38.285860    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:26:38.285860    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:26:38.350878    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:26:38.350878    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:26:38.390123    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:26:38.390123    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:26:38.478907    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:26:38.468138   14028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:38.469453   14028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:38.470815   14028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:38.472174   14028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:38.473592   14028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:26:38.468138   14028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:38.469453   14028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:38.470815   14028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:38.472174   14028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:38.473592   14028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:26:38.478907    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:26:38.478907    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:26:38.507763    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:26:38.507857    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
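
Every describe-nodes attempt fails identically: kubectl on the node dials https://localhost:8443 and is refused. That is consistent with the empty container probes above: with no kube-apiserver container, nothing is listening on port 8443, so the repeated "connection refused" lines are a symptom of the missing apiserver, not a separate failure. Two quick manual checks of the same condition (node name "minikube" assumed; curl being present in the node image is also an assumption):

    # Hedged sketch: confirm nothing answers on the apiserver port.
    docker exec minikube curl -sk https://localhost:8443/livez \
      || echo "nothing listening on 8443"
    # Check whether the kubelet is at least attempting to start static pods.
    docker exec minikube sudo journalctl -u kubelet -n 50 --no-pager
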
	I1213 10:26:41.071954    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:26:41.092438    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:26:41.126667    5404 logs.go:282] 0 containers: []
	W1213 10:26:41.126667    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:26:41.131248    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:26:41.162456    5404 logs.go:282] 0 containers: []
	W1213 10:26:41.162496    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:26:41.166664    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:26:41.202947    5404 logs.go:282] 0 containers: []
	W1213 10:26:41.203031    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:26:41.207748    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:26:41.246702    5404 logs.go:282] 0 containers: []
	W1213 10:26:41.246702    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:26:41.252360    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:26:41.288944    5404 logs.go:282] 0 containers: []
	W1213 10:26:41.288992    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:26:41.292675    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:26:41.326574    5404 logs.go:282] 0 containers: []
	W1213 10:26:41.326574    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:26:41.332832    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:26:41.370075    5404 logs.go:282] 0 containers: []
	W1213 10:26:41.370075    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:26:41.374075    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:26:41.405629    5404 logs.go:282] 0 containers: []
	W1213 10:26:41.405629    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:26:41.405629    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:26:41.405629    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:26:41.480867    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:26:41.480867    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:26:41.517299    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:26:41.517299    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:26:41.634720    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:26:41.624925   14188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:41.626417   14188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:41.627205   14188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:41.629031   14188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:41.630085   14188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:26:41.624925   14188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:41.626417   14188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:41.627205   14188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:41.629031   14188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:41.630085   14188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:26:41.634720    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:26:41.634720    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:26:41.683716    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:26:41.683716    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:26:44.251620    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:26:44.277202    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:26:44.313143    5404 logs.go:282] 0 containers: []
	W1213 10:26:44.313143    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:26:44.316146    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:26:44.347160    5404 logs.go:282] 0 containers: []
	W1213 10:26:44.347160    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:26:44.350141    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:26:44.382442    5404 logs.go:282] 0 containers: []
	W1213 10:26:44.382442    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:26:44.385435    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:26:44.420611    5404 logs.go:282] 0 containers: []
	W1213 10:26:44.420666    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:26:44.425282    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:26:44.452708    5404 logs.go:282] 0 containers: []
	W1213 10:26:44.452708    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:26:44.456875    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:26:44.484936    5404 logs.go:282] 0 containers: []
	W1213 10:26:44.484936    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:26:44.488686    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:26:44.523624    5404 logs.go:282] 0 containers: []
	W1213 10:26:44.523624    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:26:44.526916    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:26:44.564293    5404 logs.go:282] 0 containers: []
	W1213 10:26:44.564293    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:26:44.564293    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:26:44.564293    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:26:44.628702    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:26:44.628702    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:26:44.704695    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:26:44.704695    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:26:44.741701    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:26:44.741701    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:26:44.826846    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:26:44.816235   14386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:44.817036   14386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:44.819027   14386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:44.820189   14386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:44.821265   14386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:26:44.816235   14386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:44.817036   14386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:44.819027   14386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:44.820189   14386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:44.821265   14386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:26:44.826846    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:26:44.826846    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:26:47.367861    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:26:47.394080    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:26:47.429219    5404 logs.go:282] 0 containers: []
	W1213 10:26:47.429219    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:26:47.432215    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:26:47.465263    5404 logs.go:282] 0 containers: []
	W1213 10:26:47.465263    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:26:47.469430    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:26:47.502553    5404 logs.go:282] 0 containers: []
	W1213 10:26:47.502604    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:26:47.506067    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:26:47.542396    5404 logs.go:282] 0 containers: []
	W1213 10:26:47.542449    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:26:47.547790    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:26:47.590990    5404 logs.go:282] 0 containers: []
	W1213 10:26:47.591041    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:26:47.595931    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:26:47.641277    5404 logs.go:282] 0 containers: []
	W1213 10:26:47.641277    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:26:47.646024    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:26:47.699474    5404 logs.go:282] 0 containers: []
	W1213 10:26:47.699474    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:26:47.702460    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:26:47.734463    5404 logs.go:282] 0 containers: []
	W1213 10:26:47.734463    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:26:47.734463    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:26:47.735462    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:26:47.802731    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:26:47.802769    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:26:47.839164    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:26:47.839164    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:26:47.927204    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:26:47.917157   14544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:47.918265   14544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:47.919287   14544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:47.921206   14544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:47.922405   14544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:26:47.917157   14544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:47.918265   14544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:47.919287   14544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:47.921206   14544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:47.922405   14544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:26:47.927204    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:26:47.927746    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:26:47.965985    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:26:47.965985    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
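
The container-status gather relies on a small shell fallback: "which crictl || echo crictl" substitutes either the full crictl path or the bare name (which then fails fast if crictl is absent), and the outer "||" falls back to the plain Docker CLI. Spelled out:

    # Hedged sketch of the crictl-or-docker fallback used in the log above.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
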
	I1213 10:26:50.528200    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:26:50.553042    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:26:50.592041    5404 logs.go:282] 0 containers: []
	W1213 10:26:50.592096    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:26:50.596824    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:26:50.633012    5404 logs.go:282] 0 containers: []
	W1213 10:26:50.633012    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:26:50.637524    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:26:50.679752    5404 logs.go:282] 0 containers: []
	W1213 10:26:50.679752    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:26:50.683740    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:26:50.714739    5404 logs.go:282] 0 containers: []
	W1213 10:26:50.714739    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:26:50.718742    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:26:50.750587    5404 logs.go:282] 0 containers: []
	W1213 10:26:50.750643    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:26:50.754903    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:26:50.786006    5404 logs.go:282] 0 containers: []
	W1213 10:26:50.786006    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:26:50.789004    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:26:50.825994    5404 logs.go:282] 0 containers: []
	W1213 10:26:50.826027    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:26:50.831217    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:26:50.864808    5404 logs.go:282] 0 containers: []
	W1213 10:26:50.864808    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:26:50.864808    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:26:50.864808    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:26:50.955426    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:26:50.955480    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:26:50.997456    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:26:50.997456    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:26:51.105229    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:26:51.097843   14704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:51.099025   14704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:51.100006   14704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:51.101075   14704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:51.102071   14704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:26:51.097843   14704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:51.099025   14704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:51.100006   14704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:51.101075   14704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:51.102071   14704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:26:51.105229    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:26:51.105229    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:26:51.141731    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:26:51.141731    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:26:53.711698    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:26:53.731586    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:26:53.765367    5404 logs.go:282] 0 containers: []
	W1213 10:26:53.765367    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:26:53.769994    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:26:53.807934    5404 logs.go:282] 0 containers: []
	W1213 10:26:53.807960    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:26:53.813111    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:26:53.846550    5404 logs.go:282] 0 containers: []
	W1213 10:26:53.846550    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:26:53.850543    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:26:53.885534    5404 logs.go:282] 0 containers: []
	W1213 10:26:53.885534    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:26:53.896550    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:26:53.933544    5404 logs.go:282] 0 containers: []
	W1213 10:26:53.933544    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:26:53.938541    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:26:54.013267    5404 logs.go:282] 0 containers: []
	W1213 10:26:54.013267    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:26:54.016261    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:26:54.046262    5404 logs.go:282] 0 containers: []
	W1213 10:26:54.046262    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:26:54.050261    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:26:54.083271    5404 logs.go:282] 0 containers: []
	W1213 10:26:54.083271    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:26:54.083271    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:26:54.083271    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:26:54.147260    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:26:54.147260    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:26:54.190311    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:26:54.190311    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:26:54.291264    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:26:54.280434   14869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:54.281210   14869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:54.283649   14869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:54.285825   14869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:54.286440   14869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:26:54.280434   14869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:54.281210   14869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:54.283649   14869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:54.285825   14869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:54.286440   14869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:26:54.291264    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:26:54.291264    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:26:54.320273    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:26:54.320273    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:26:56.886687    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:26:56.916697    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:26:56.961695    5404 logs.go:282] 0 containers: []
	W1213 10:26:56.961695    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:26:56.965685    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:26:57.011692    5404 logs.go:282] 0 containers: []
	W1213 10:26:57.011692    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:26:57.016685    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:26:57.058684    5404 logs.go:282] 0 containers: []
	W1213 10:26:57.058684    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:26:57.062682    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:26:57.110695    5404 logs.go:282] 0 containers: []
	W1213 10:26:57.110695    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:26:57.114697    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:26:57.157680    5404 logs.go:282] 0 containers: []
	W1213 10:26:57.157680    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:26:57.162687    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:26:57.211687    5404 logs.go:282] 0 containers: []
	W1213 10:26:57.211687    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:26:57.216684    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:26:57.256691    5404 logs.go:282] 0 containers: []
	W1213 10:26:57.256691    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:26:57.260695    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:26:57.304686    5404 logs.go:282] 0 containers: []
	W1213 10:26:57.304686    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:26:57.304686    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:26:57.304686    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:26:57.380699    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:26:57.380699    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:26:57.429707    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:26:57.429707    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:26:57.546697    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:26:57.529164   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:57.530461   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:57.540041   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:57.541307   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:57.542641   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:26:57.529164   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:57.530461   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:57.540041   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:57.541307   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:26:57.542641   15028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:26:57.546697    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:26:57.546697    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:26:57.576698    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:26:57.577708    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:00.159889    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:00.193269    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:00.235861    5404 logs.go:282] 0 containers: []
	W1213 10:27:00.236865    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:00.239855    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:00.266862    5404 logs.go:282] 0 containers: []
	W1213 10:27:00.266862    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:00.271873    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:00.312216    5404 logs.go:282] 0 containers: []
	W1213 10:27:00.312744    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:00.317400    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:00.346906    5404 logs.go:282] 0 containers: []
	W1213 10:27:00.346906    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:00.349905    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:00.384659    5404 logs.go:282] 0 containers: []
	W1213 10:27:00.384659    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:00.390650    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:00.434502    5404 logs.go:282] 0 containers: []
	W1213 10:27:00.434502    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:00.438075    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:00.475029    5404 logs.go:282] 0 containers: []
	W1213 10:27:00.475065    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:00.479144    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:00.510743    5404 logs.go:282] 0 containers: []
	W1213 10:27:00.510743    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:00.510743    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:00.510743    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:00.573638    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:00.573638    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:00.660341    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:00.660341    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:00.702338    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:00.702338    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:00.798666    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:00.787440   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:00.788559   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:00.791807   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:00.793147   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:00.794168   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:00.787440   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:00.788559   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:00.791807   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:00.793147   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:00.794168   15220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:00.798666    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:00.798666    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:03.336688    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:03.374576    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:03.446451    5404 logs.go:282] 0 containers: []
	W1213 10:27:03.446451    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:03.449448    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:03.488344    5404 logs.go:282] 0 containers: []
	W1213 10:27:03.488377    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:03.491263    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:03.532629    5404 logs.go:282] 0 containers: []
	W1213 10:27:03.532629    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:03.537785    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:03.591418    5404 logs.go:282] 0 containers: []
	W1213 10:27:03.591418    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:03.596262    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:03.643105    5404 logs.go:282] 0 containers: []
	W1213 10:27:03.643184    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:03.648150    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:03.709507    5404 logs.go:282] 0 containers: []
	W1213 10:27:03.709549    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:03.714411    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:03.777601    5404 logs.go:282] 0 containers: []
	W1213 10:27:03.777648    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:03.784175    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:03.819743    5404 logs.go:282] 0 containers: []
	W1213 10:27:03.819743    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:03.819743    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:03.819743    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:03.888768    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:03.888768    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:03.948931    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:03.948931    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:04.044848    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:04.034732   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:04.035859   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:04.037042   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:04.038233   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:04.039095   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:04.034732   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:04.035859   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:04.037042   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:04.038233   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:04.039095   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:04.044848    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:04.044848    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:04.071850    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:04.071850    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
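
The probes repeat on a roughly three-second cadence (10:26:35, :38, :41, ... through 10:27:06), which matches a poll-until-deadline wait on apiserver health rather than a hung process; the excerpt ends while the loop is still retrying, and this pattern typically ends with the start command reporting a wait timeout. A bounded version of the same poll (interval and deadline are illustrative, not minikube's actual values):

    # Hedged sketch: poll for the apiserver process with a deadline.
    deadline=$((SECONDS + 240))
    until docker exec minikube sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver"
        break
      fi
      sleep 3
    done
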
	I1213 10:27:06.633379    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:06.659612    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:06.687667    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.687737    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:06.691602    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:06.721405    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.721405    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:06.725270    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:06.757478    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.757478    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:06.761297    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:06.801212    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.801212    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:06.805113    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:06.849918    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.849918    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:06.853787    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:06.888435    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.888435    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:06.895174    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:06.930085    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.930085    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:06.933086    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:06.964089    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.964089    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
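Each scan above issues docker ps -a --filter=name=k8s_<component> --format={{.ID}} for eight component names. The k8s_ prefix is the naming convention the Docker-based runtime (cri-dockerd) applies to pod containers, so zero matches across every name means the kubelet never created any control-plane containers at all. A condensed sketch of the same scan, with the component list copied from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		// --format {{.ID}} prints one container ID per line; an empty
		// result means no container carries the k8s_<component> name.
		out, _ := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if ids := strings.Fields(string(out)); len(ids) > 0 {
			fmt.Printf("%s: %v\n", c, ids)
		} else {
			fmt.Printf("no container was found matching %q\n", c)
		}
	}
}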
	I1213 10:27:06.964089    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:06.964089    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:07.052109    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:07.052109    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:07.092822    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:07.092822    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:07.184921    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:07.172596   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.173907   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.175435   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.176746   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.177730   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:07.184921    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:07.184921    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:07.212614    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:07.212614    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:09.772840    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:09.803912    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:09.843377    5404 logs.go:282] 0 containers: []
	W1213 10:27:09.843377    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:09.846881    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:09.876528    5404 logs.go:282] 0 containers: []
	W1213 10:27:09.876528    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:09.879529    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:09.910044    5404 logs.go:282] 0 containers: []
	W1213 10:27:09.910044    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:09.916549    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:09.959417    5404 logs.go:282] 0 containers: []
	W1213 10:27:09.959417    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:09.964602    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:09.999344    5404 logs.go:282] 0 containers: []
	W1213 10:27:09.999344    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:10.002336    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:10.032356    5404 logs.go:282] 0 containers: []
	W1213 10:27:10.032356    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:10.036336    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:10.070437    5404 logs.go:282] 0 containers: []
	W1213 10:27:10.070489    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:10.074554    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:10.112271    5404 logs.go:282] 0 containers: []
	W1213 10:27:10.112330    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:10.112330    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:10.112330    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:10.147886    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:10.147886    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:10.243310    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:10.232461   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.233610   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.235121   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.236121   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.237697   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:10.243405    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:10.243405    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:10.272729    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:10.272729    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:10.326215    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:10.326215    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
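The timestamps show the whole cycle (pgrep probe, eight container scans, four log sources) repeating roughly every three seconds, consistent with a poll-until-healthy loop that re-gathers diagnostics on every miss until a deadline expires. A simplified sketch of that pattern, assuming the retry interval and deadline rather than quoting minikube's actual implementation:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute) // assumed deadline
	for time.Now().Before(deadline) {
		// pgrep -x (exact), -n (newest), -f (match the full command line)
		// exits 0 only when a matching process exists.
		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		// a real implementation would re-gather diagnostics here
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for kube-apiserver")
}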
	I1213 10:27:12.902491    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:12.927076    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:12.960518    5404 logs.go:282] 0 containers: []
	W1213 10:27:12.960518    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:12.964255    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:12.994335    5404 logs.go:282] 0 containers: []
	W1213 10:27:12.994335    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:12.998437    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:13.029262    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.029262    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:13.032271    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:13.063264    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.063264    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:13.066261    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:13.100216    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.100278    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:13.103950    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:13.137029    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.137029    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:13.140883    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:13.174413    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.174413    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:13.178202    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:13.207016    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.207016    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:13.207016    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:13.207016    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:13.259542    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:13.259542    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:13.332062    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:13.332062    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:13.371879    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:13.371879    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:13.456462    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:13.445517   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.446626   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.447825   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.448792   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.450006   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:13.456462    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:13.456462    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:15.989415    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:16.012448    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:16.052242    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.052312    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:16.055633    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:16.090683    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.090683    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:16.093931    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:16.133949    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.133949    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:16.138532    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:16.171831    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.171831    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:16.175955    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:16.216817    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.216864    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:16.221712    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:16.258393    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.258393    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:16.261397    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:16.294407    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.294407    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:16.297391    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:16.333410    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.333410    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:16.333410    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:16.333410    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:16.410413    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:16.410413    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:16.450393    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:16.450393    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:16.546373    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:16.533035   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.534931   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.537458   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.540395   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.542178   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:16.546373    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:16.546373    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:16.575806    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:16.575806    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
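The describe nodes step runs the version-pinned kubectl under /var/lib/minikube/binaries/v1.35.0-beta.0/ against the node-local kubeconfig; its non-zero exit is logged as a W-level warning rather than aborting, which is why the Docker, container status, and kubelet sections still follow each failure. A sketch of invoking that command and recovering the exit status, with the paths copied from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl"
	out, err := exec.Command("sudo", kubectl, "describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// corresponds to the "Process exited with status 1" warnings above
		fmt.Printf("kubectl exited with status %d:\n%s", exitErr.ExitCode(), out)
		return
	}
	fmt.Print(string(out))
}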
	I1213 10:27:19.148785    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:19.175720    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:19.209231    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.209231    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:19.217486    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:19.260811    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.260866    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:19.267265    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:19.314924    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.314924    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:19.320918    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:19.357550    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.357550    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:19.361556    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:19.392800    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.392800    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:19.397769    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:19.441959    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.441959    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:19.444967    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:19.479965    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.479965    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:19.484482    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:19.525249    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.525314    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:19.525357    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:19.525357    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:19.570778    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:19.570778    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:19.680558    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:19.668248   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.670354   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.672621   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.673972   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.675837   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:19.680656    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:19.680693    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:19.714060    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:19.714103    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:19.764555    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:19.764555    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:22.334977    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:22.359551    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:22.400355    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.400355    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:22.404363    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:22.438349    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.438349    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:22.442349    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:22.473511    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.473511    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:22.478566    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:22.512393    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.512393    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:22.516409    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:22.550405    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.550405    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:22.553404    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:22.584398    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.584398    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:22.588395    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:22.615398    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.615398    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:22.618396    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:22.649404    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.649404    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:22.649404    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:22.649404    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:22.710398    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:22.710398    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:22.751988    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:22.751988    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:22.843768    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:22.835619   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.836770   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.837683   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.838841   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.839832   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:22.843768    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:22.843768    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:22.871626    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:22.871626    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:25.434319    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:25.459020    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:25.500957    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.500957    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:25.505654    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:25.533996    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.534053    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:25.538297    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:25.569653    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.569653    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:25.573591    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:25.606004    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.606004    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:25.612212    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:25.641756    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.641835    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:25.645703    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:25.677304    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.677342    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:25.680988    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:25.712812    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.712812    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:25.716992    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:25.748063    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.748063    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:25.748063    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:25.748063    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:25.800759    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:25.800759    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:25.873214    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:25.873214    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:25.914015    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:25.914015    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:26.003163    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:25.989841   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.991273   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.992553   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.995529   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.997804   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:26.003163    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:26.003163    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
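The Docker log step passes -u twice so journalctl merges the docker and cri-docker unit logs into a single chronological stream, and -n 400 caps the capture at the newest 400 entries. The same invocation from Go, shown only as an illustration:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo", "journalctl",
		"-u", "docker", "-u", "cri-docker", "-n", "400").CombinedOutput()
	if err != nil {
		fmt.Println("journalctl failed:", err)
		return
	}
	fmt.Print(string(out))
}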
	I1213 10:27:28.537436    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:28.561363    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:28.619392    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.619392    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:28.623396    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:28.669400    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.669400    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:28.676410    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:28.717401    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.717401    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:28.721393    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:28.757400    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.757400    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:28.760393    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:28.800402    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.800402    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:28.803398    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:28.841400    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.841400    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:28.844399    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:28.878399    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.878399    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:28.882403    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:28.916403    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.916403    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:28.916403    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:28.916403    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:28.992400    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:28.992400    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:29.040404    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:29.040404    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:29.149363    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:29.137915   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.139172   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.141264   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.142415   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.144176   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:29.149363    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:29.149363    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:29.183066    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:29.183066    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:31.746729    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:31.766711    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:31.799712    5404 logs.go:282] 0 containers: []
	W1213 10:27:31.799712    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:31.802714    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:31.848351    5404 logs.go:282] 0 containers: []
	W1213 10:27:31.848351    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:31.852710    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:31.893847    5404 logs.go:282] 0 containers: []
	W1213 10:27:31.894377    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:31.897862    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:31.937061    5404 logs.go:282] 0 containers: []
	W1213 10:27:31.937061    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:31.942850    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:31.992025    5404 logs.go:282] 0 containers: []
	W1213 10:27:31.992025    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:31.996453    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:32.043414    5404 logs.go:282] 0 containers: []
	W1213 10:27:32.043414    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:32.047410    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:32.082416    5404 logs.go:282] 0 containers: []
	W1213 10:27:32.082416    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:32.086413    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:32.117413    5404 logs.go:282] 0 containers: []
	W1213 10:27:32.117413    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:32.117413    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:32.117413    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:32.184436    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:32.184436    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:32.248252    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:32.248252    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:32.288323    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:32.288323    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:32.395681    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:32.380582   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.381602   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.383843   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.385774   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.388153   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:32.395681    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:32.395681    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:34.939082    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:34.963857    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:35.002856    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.002856    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:35.005854    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:35.038851    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.038851    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:35.041857    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:35.073853    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.073853    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:35.077869    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:35.110852    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.110852    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:35.113850    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:35.152093    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.152093    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:35.156094    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:35.188087    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.188087    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:35.192090    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:35.222187    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.222187    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:35.226185    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:35.257190    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.257190    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:35.257190    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:35.257190    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:35.374442    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:35.357763   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.358774   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.360108   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.362218   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.363767   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
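The "describe nodes" gather shells into the node and runs the kubectl binary bundled for the Kubernetes version under test against the node-local kubeconfig, so it fails with the same refused connection. To reproduce it interactively (the binary path matches the v1.35.0-beta.0 build named in the log; <profile> is a placeholder):

    minikube ssh -p <profile> -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      describe nodes --kubeconfig=/var/lib/minikube/kubeconfig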
	I1213 10:27:35.374442    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:35.374442    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:35.414747    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:35.414747    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:35.470732    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:35.470732    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:35.530744    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:35.530744    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
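kubelet and kernel logs are collected with fixed 400-line windows; when the control-plane containers are missing entirely, the kubelet journal is usually where the underlying failure first surfaces. A hand-run equivalent, slightly simplified (the extra dmesg flags in the log only disable the pager and color output):

    # Last 400 kubelet journal lines, then warning-and-above kernel messages.
    sudo journalctl -u kubelet -n 400
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400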
	I1213 10:27:38.092084    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:38.124676    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:38.161924    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.161924    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:38.164928    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:38.198945    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.198945    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:38.201915    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:38.228927    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.228927    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:38.231926    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:38.270851    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.270955    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:38.276558    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:38.313393    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.313393    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:38.316394    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:38.348406    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.348406    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:38.351414    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:38.380397    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.380397    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:38.385402    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:38.417397    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.417397    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:38.417397    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:38.417397    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:38.488395    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:38.488395    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:38.526408    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:38.526408    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:38.618667    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:38.608046   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.608871   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.611071   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.612089   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.612946   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:38.618667    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:38.618667    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:38.648614    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:38.649617    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
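Container status is gathered with a crictl-first fallback: if crictl resolves on PATH it is used, otherwise the command falls back to plain docker ps. The backtick expression in the log line above can be run as-is inside the node; this is the equivalent $(...) form:

    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a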
	I1213 10:27:41.206851    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:41.233354    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:41.265257    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.265257    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:41.269906    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:41.306686    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.306741    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:41.310710    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:41.357371    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.357427    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:41.361994    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:41.408206    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.408206    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:41.412215    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:41.440724    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.440761    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:41.444506    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:41.485572    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.485572    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:41.489246    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:41.524191    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.524191    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:41.528287    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:41.561636    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.561708    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:41.561708    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:41.561743    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:41.640633    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:41.640633    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:41.679302    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:41.680274    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:41.769509    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:41.756355   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.757496   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.758621   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.762100   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.763629   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:41.769509    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:41.769509    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:41.799016    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:41.799067    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:44.369546    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:44.392404    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:44.422173    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.422173    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:44.426709    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:44.462171    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.462253    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:44.466284    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:44.494675    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.494675    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:44.499090    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:44.525551    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.525576    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:44.529460    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:44.557893    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.557944    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:44.561644    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:44.592507    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.592507    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:44.598127    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:44.628090    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.628112    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:44.632134    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:44.680973    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.681027    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:44.681074    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:44.681074    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:44.750683    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:44.750683    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:44.791179    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:44.791179    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:44.880384    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:44.868761   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.869600   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.870808   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.872391   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.873598   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:44.880415    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:44.880415    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:44.912168    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:44.912168    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:47.473178    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:47.501052    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:47.534467    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.534540    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:47.538128    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:47.568455    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.568455    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:47.575037    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:47.610628    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.610628    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:47.614588    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:47.650306    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.650306    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:47.655401    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:47.688313    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.688313    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:47.691318    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:47.722314    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.722859    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:47.727885    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:47.758032    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.758032    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:47.761680    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:47.793670    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.793670    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:47.793670    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:47.793670    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:47.882682    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:47.871699   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.872599   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.874519   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.875664   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.876452   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:47.882682    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:47.882682    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:47.916355    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:47.916355    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:47.969201    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:47.969201    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:48.035144    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:48.036141    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:50.578488    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:50.600943    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:50.631833    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.631833    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:50.635998    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:50.674649    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.674649    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:50.677731    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:50.712195    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.712322    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:50.716398    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:50.750764    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.750764    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:50.754125    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:50.786595    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.786595    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:50.790175    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:50.818734    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.818734    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:50.821737    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:50.854679    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.854679    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:50.859104    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:50.889584    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.889584    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:50.889584    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:50.889584    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:50.947004    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:50.947004    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:50.984338    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:50.984338    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:51.071556    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:51.060341   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.061513   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.063176   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.064640   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.065750   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:51.071556    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:51.071556    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:51.102630    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:51.102630    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:53.655677    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:53.682918    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:53.715653    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.715653    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:53.718956    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:53.747498    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.747498    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:53.751451    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:53.781030    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.781060    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:53.785519    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:53.815077    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.815077    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:53.818373    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:53.851406    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.851432    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:53.855158    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:53.886371    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.886426    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:53.890230    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:53.921595    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.921595    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:53.925821    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:53.958793    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.958867    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:53.958867    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:53.958867    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:54.023643    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:54.023643    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:54.069221    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:54.069221    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:54.158534    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:54.148053   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:54.149254   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:54.150659   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:54.151827   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:54.152932   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:54.158534    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:54.158534    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:54.187711    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:54.187711    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:56.751844    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:56.777473    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:56.819791    5404 logs.go:282] 0 containers: []
	W1213 10:27:56.819791    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:56.823836    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:56.851634    5404 logs.go:282] 0 containers: []
	W1213 10:27:56.851634    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:56.856515    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:56.890733    5404 logs.go:282] 0 containers: []
	W1213 10:27:56.890733    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:56.896015    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:56.929283    5404 logs.go:282] 0 containers: []
	W1213 10:27:56.929283    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:56.933600    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:56.965281    5404 logs.go:282] 0 containers: []
	W1213 10:27:56.965380    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:56.971621    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:57.007594    5404 logs.go:282] 0 containers: []
	W1213 10:27:57.007594    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:57.011652    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:57.041984    5404 logs.go:282] 0 containers: []
	W1213 10:27:57.041984    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:57.047208    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:57.080712    5404 logs.go:282] 0 containers: []
	W1213 10:27:57.080712    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:57.080712    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:57.080712    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:57.149704    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:57.149704    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:57.193071    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:57.193071    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:57.285994    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:57.274215   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:57.274873   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:57.277962   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:57.279748   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:57.281147   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:57.285994    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:57.285994    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:57.321321    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:57.321321    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:59.885480    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:59.908525    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:59.938475    5404 logs.go:282] 0 containers: []
	W1213 10:27:59.938475    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:59.942628    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:59.971795    5404 logs.go:282] 0 containers: []
	W1213 10:27:59.971795    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:59.980520    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:00.013354    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.013413    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:00.017504    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:00.052020    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.052020    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:00.055918    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:00.092456    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.092456    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:00.099457    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:00.132599    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.132599    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:00.136451    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:00.166632    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.166765    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:00.170268    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:00.200588    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.200588    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:00.200588    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:00.200588    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:00.270835    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:00.270835    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:00.309448    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:00.310446    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:00.403831    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:00.393165   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:00.394233   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:00.395506   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:00.396522   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:00.397851   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:28:00.403831    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:00.403831    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:00.431826    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:00.431826    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
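Each diagnostic pass is separated by a pgrep poll for a running apiserver process; in this excerpt the poll repeats roughly every three seconds and never succeeds. A standalone wait loop with the same predicate would look like the sketch below (hypothetical: the real retry budget and overall timeout live inside minikube and are not visible in this excerpt):

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3
    done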
	I1213 10:28:02.990203    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:03.012584    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:03.048099    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.049085    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:03.054131    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:03.090044    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.090114    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:03.094206    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:03.124610    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.124610    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:03.128713    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:03.158624    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.158624    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:03.162039    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:03.197023    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.197023    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:03.201011    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:03.231523    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.231523    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:03.238992    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:03.270780    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.270780    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:03.273777    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:03.307802    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.307802    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:03.307802    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:03.307802    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:28:03.365023    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:03.365023    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:03.434753    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:03.434753    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:03.474998    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:03.474998    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:03.558479    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:03.548624   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.550169   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.550790   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.552338   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.553567   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:28:03.558479    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:03.558479    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:06.093878    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:06.119160    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:06.151920    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.151956    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:06.155686    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:06.185340    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.185340    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:06.189047    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:06.218663    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.218713    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:06.223022    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:06.251817    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.251817    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:06.256048    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:06.288967    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.289042    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:06.293045    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:06.324404    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.324404    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:06.328470    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:06.359488    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.359488    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:06.363305    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:06.395085    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.395085    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:06.395085    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:06.395085    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:06.460705    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:06.460705    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:06.500531    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:06.500531    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:06.584202    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:06.573119   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.576304   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.577709   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.579122   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.580090   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:28:06.573119   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.576304   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.577709   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.579122   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.580090   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:28:06.584202    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:06.584202    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:06.612936    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:06.612936    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:28:09.171143    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:09.196436    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:09.230003    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.230072    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:09.234113    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:09.263594    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.263629    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:09.267574    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:09.295583    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.295671    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:09.300744    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:09.330627    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.330627    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:09.334426    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:09.370279    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.370279    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:09.374820    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:09.404955    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.405033    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:09.410253    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:09.441568    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.441568    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:09.445297    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:09.485821    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.485874    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:09.485874    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:09.485936    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:09.548603    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:09.548603    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:09.588521    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:09.588521    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:09.678327    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:09.666892   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.667836   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.670310   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.671394   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.672438   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:28:09.666892   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.667836   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.670310   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.671394   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.672438   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:28:09.678369    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:09.678369    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:09.705500    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:09.705500    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:28:12.262086    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:12.290635    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:12.327110    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.327110    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:12.331105    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:12.360305    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.360305    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:12.367813    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:12.398968    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.399045    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:12.403042    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:12.436089    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.436089    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:12.439942    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:12.471734    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.471734    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:12.475722    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:12.505991    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.506024    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:12.509742    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:12.539425    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.539425    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:12.543823    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:12.573279    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.573344    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:12.573344    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:12.573344    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:12.636807    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:12.636807    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:12.677094    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:12.677094    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:12.762424    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:12.751891   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.752690   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.755186   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.756173   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.756852   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:28:12.751891   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.752690   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.755186   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.756173   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.756852   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:28:12.762424    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:12.762424    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:12.790164    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:12.790164    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:28:15.344891    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:15.368646    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:15.404255    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.404255    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:15.409408    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:15.441938    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.441938    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:15.445068    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:15.475697    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.475697    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:15.479253    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:15.511327    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.511327    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:15.515265    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:15.545395    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.545395    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:15.548941    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:15.579842    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.579918    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:15.584969    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:15.614571    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.614571    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:15.618436    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:15.650365    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.650427    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:15.650427    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:15.650427    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:15.714351    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:15.714351    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:15.752018    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:15.752018    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:15.834772    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:15.824883   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.826055   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.826571   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.829124   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.829823   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:28:15.824883   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.826055   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.826571   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.829124   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.829823   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:28:15.834772    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:15.834772    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:15.866850    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:15.866850    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:28:18.423576    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:18.449885    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:18.482529    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.482601    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:18.485766    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:18.514138    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.514797    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:18.518214    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:18.550542    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.550542    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:18.553540    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:18.584106    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.584106    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:18.588197    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:18.619945    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.619977    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:18.623644    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:18.654453    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.654453    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:18.657446    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:18.687250    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.687250    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:18.690703    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:18.717150    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.717150    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:18.717150    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:18.717150    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:28:18.770937    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:18.770937    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:18.835919    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:18.835919    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:18.872319    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:18.873326    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:18.962288    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:18.952563   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.953751   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.955148   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.956811   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.959348   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:28:18.952563   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.953751   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.955148   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.956811   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.959348   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:28:18.962288    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:18.963246    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:21.496578    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:21.522995    5404 out.go:203] 
	W1213 10:28:21.525440    5404 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1213 10:28:21.525581    5404 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1213 10:28:21.525667    5404 out.go:285] * Related issues:
	* Related issues:
	W1213 10:28:21.525667    5404 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1213 10:28:21.525824    5404 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1213 10:28:21.528379    5404 out.go:203] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p newest-cni-307000 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0": exit status 105
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-307000
helpers_test.go:244: (dbg) docker inspect newest-cni-307000:

-- stdout --
	[
	    {
	        "Id": "cc243490f4045f6275620fead3cf743bdcc06793f30944d53d3d0e22c416211e",
	        "Created": "2025-12-13T10:11:37.912113644Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 431795,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:22:07.362257704Z",
	            "FinishedAt": "2025-12-13T10:22:04.657974104Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/cc243490f4045f6275620fead3cf743bdcc06793f30944d53d3d0e22c416211e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cc243490f4045f6275620fead3cf743bdcc06793f30944d53d3d0e22c416211e/hostname",
	        "HostsPath": "/var/lib/docker/containers/cc243490f4045f6275620fead3cf743bdcc06793f30944d53d3d0e22c416211e/hosts",
	        "LogPath": "/var/lib/docker/containers/cc243490f4045f6275620fead3cf743bdcc06793f30944d53d3d0e22c416211e/cc243490f4045f6275620fead3cf743bdcc06793f30944d53d3d0e22c416211e-json.log",
	        "Name": "/newest-cni-307000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-307000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-307000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1fd6cedff83bee99df393eab952a55cc2565a988396fbf552640cb0ef5f70bba-init/diff:/var/lib/docker/overlay2/429aa299c6fcdb1695d08ec7c893c57c033afffcd3ec41fc904bf3236db5abde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1fd6cedff83bee99df393eab952a55cc2565a988396fbf552640cb0ef5f70bba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1fd6cedff83bee99df393eab952a55cc2565a988396fbf552640cb0ef5f70bba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1fd6cedff83bee99df393eab952a55cc2565a988396fbf552640cb0ef5f70bba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-307000",
	                "Source": "/var/lib/docker/volumes/newest-cni-307000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-307000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-307000",
	                "name.minikube.sigs.k8s.io": "newest-cni-307000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ac71d39e43dea35bc9d6021f600e0a448ae9dca45dd0a410ca179f856b12121e",
	            "SandboxKey": "/var/run/docker/netns/ac71d39e43de",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53942"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53943"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53944"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53939"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53940"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-307000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "091d798055d24cd11a8819044665f960a2f1124bb052fb661c5793e42aeec481",
	                    "EndpointID": "d344064538b6f36208f8c5d92ef1203acaac8ed63c99703b04ed68908d156813",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-307000",
	                        "cc243490f404"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-307000 -n newest-cni-307000
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-307000 -n newest-cni-307000: exit status 2 (598.2845ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-307000 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-307000 logs -n 25: (1.1846934s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────┬───────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                      │    PROFILE    │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────┼───────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-416400 sudo iptables -t nat -L -n -v                                 │ bridge-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo systemctl status kubelet --all --full --no-pager         │ bridge-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo systemctl cat kubelet --no-pager                         │ bridge-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo journalctl -xeu kubelet --all --full --no-pager          │ bridge-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo cat /etc/kubernetes/kubelet.conf                         │ bridge-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo cat /var/lib/kubelet/config.yaml                         │ bridge-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo systemctl status docker --all --full --no-pager          │ bridge-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo systemctl cat docker --no-pager                          │ bridge-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo cat /etc/docker/daemon.json                              │ bridge-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo docker system info                                       │ bridge-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo systemctl status cri-docker --all --full --no-pager      │ bridge-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo systemctl cat cri-docker --no-pager                      │ bridge-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ bridge-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo cat /usr/lib/systemd/system/cri-docker.service           │ bridge-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo cri-dockerd --version                                    │ bridge-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo systemctl status containerd --all --full --no-pager      │ bridge-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo systemctl cat containerd --no-pager                      │ bridge-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo cat /lib/systemd/system/containerd.service               │ bridge-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo cat /etc/containerd/config.toml                          │ bridge-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo containerd config dump                                   │ bridge-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo systemctl status crio --all --full --no-pager            │ bridge-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │                     │
	│ ssh     │ -p bridge-416400 sudo systemctl cat crio --no-pager                            │ bridge-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ bridge-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo crio config                                              │ bridge-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ delete  │ -p bridge-416400                                                               │ bridge-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────┴───────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:27:08
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:27:08.467331    8476 out.go:360] Setting OutFile to fd 1212 ...
	I1213 10:27:08.510327    8476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:27:08.510327    8476 out.go:374] Setting ErrFile to fd 1652...
	I1213 10:27:08.510327    8476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:27:08.525338    8476 out.go:368] Setting JSON to false
	I1213 10:27:08.528326    8476 start.go:133] hostinfo: {"hostname":"minikube4","uptime":7435,"bootTime":1765614192,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 10:27:08.529330    8476 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 10:27:08.533334    8476 out.go:179] * [kubenet-416400] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 10:27:08.536332    8476 notify.go:221] Checking for updates...
	I1213 10:27:08.538327    8476 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:27:08.541325    8476 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:27:08.543338    8476 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 10:27:08.545327    8476 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:27:08.547331    8476 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:27:08.550333    8476 config.go:182] Loaded profile config "bridge-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 10:27:08.551337    8476 config.go:182] Loaded profile config "newest-cni-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:27:08.551337    8476 config.go:182] Loaded profile config "no-preload-803600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:27:08.551337    8476 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:27:08.665330    8476 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 10:27:08.669336    8476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:27:08.911222    8476 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:27:08.888781942 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:27:08.914226    8476 out.go:179] * Using the docker driver based on user configuration
	I1213 10:27:08.917218    8476 start.go:309] selected driver: docker
	I1213 10:27:08.917218    8476 start.go:927] validating driver "docker" against <nil>
	I1213 10:27:08.917218    8476 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:27:09.005866    8476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:27:09.274907    8476 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:27:09.25177994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:27:09.275859    8476 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:27:09.275859    8476 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:27:09.278852    8476 out.go:179] * Using Docker Desktop driver with root privileges
	I1213 10:27:09.281854    8476 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1213 10:27:09.281854    8476 start.go:353] cluster config:
	{Name:kubenet-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:27:09.284873    8476 out.go:179] * Starting "kubenet-416400" primary control-plane node in "kubenet-416400" cluster
	I1213 10:27:09.288885    8476 cache.go:134] Beginning downloading kic base image for docker with docker
	I1213 10:27:09.290853    8476 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:27:09.296882    8476 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:27:09.296882    8476 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:27:09.296882    8476 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1213 10:27:09.296882    8476 cache.go:65] Caching tarball of preloaded images
	I1213 10:27:09.297854    8476 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 10:27:09.297854    8476 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1213 10:27:09.297854    8476 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\config.json ...
	I1213 10:27:09.297854    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\config.json: {Name:mk0f8afb036d1878ac71666ce4d58fd434d1389e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:09.364866    8476 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:27:09.364866    8476 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:27:09.364866    8476 cache.go:243] Successfully downloaded all kic artifacts
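
	[Note on the cache steps above: before provisioning, minikube checks for a pre-generated image tarball in the local cache and skips the download when one exists for the requested Kubernetes version and container runtime. A minimal Go sketch of that existence check, assuming the path layout visible in the log; the helper name and the "v18" schema segment are taken from the log paths, not from minikube's API:]

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath mirrors the cache layout seen in the log:
// <minikube home>\cache\preloaded-tarball\preloaded-images-k8s-v18-<version>-<runtime>-overlay2-amd64.tar.lz4
// The "v18" preload schema segment is copied from the log path and is an assumption here.
func preloadPath(minikubeHome, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(`C:\Users\jenkins.minikube4\minikube-integration\.minikube`, "v1.34.2", "docker")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload, skipping download:", p)
	} else {
		fmt.Println("no local preload, would download:", p)
	}
}

	[Here the v1.34.2/docker preload was already cached, so only the verification step ran.]
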
	I1213 10:27:09.364866    8476 start.go:360] acquireMachinesLock for kubenet-416400: {Name:mk28dcadbda914f3b76421bc1eef202d654b5e0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:27:09.365883    8476 start.go:364] duration metric: took 0s to acquireMachinesLock for "kubenet-416400"
	I1213 10:27:09.365883    8476 start.go:93] Provisioning new machine with config: &{Name:kubenet-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-416400 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFir
mwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 10:27:09.365883    8476 start.go:125] createHost starting for "" (driver="docker")
	I1213 10:27:06.633379    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:06.659612    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:06.687667    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.687737    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:06.691602    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:06.721405    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.721405    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:06.725270    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:06.757478    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.757478    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:06.761297    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:06.801212    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.801212    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:06.805113    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:06.849918    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.849918    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:06.853787    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:06.888435    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.888435    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:06.895174    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:06.930085    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.930085    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:06.933086    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:06.964089    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.964089    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:06.964089    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:06.964089    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:07.052109    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:07.052109    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:07.092822    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:07.092822    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:07.184921    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:07.172596   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.173907   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.175435   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.176746   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.177730   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:07.184921    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:07.184921    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:07.212614    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:07.212614    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
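
	[Note on the probe cycle above: process 5404 repeats one pattern per control-plane component, listing all containers (running or exited) whose name matches k8s_<component> and logging a warning when none exist. A sketch of that loop under the same assumptions (docker CLI on PATH, component names copied from the log); this is an illustration, not minikube's actual logs.go:]

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// components lists the container name suffixes probed in the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

func main() {
	for _, c := range components {
		// Same invocation as the log: all containers (running or not)
		// whose name matches k8s_<component>, IDs only.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("probe for %q failed: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
}

	[With the apiserver never having started, every probe returns zero containers, which is why each cycle falls back to gathering kubelet, dmesg, and journal output instead.]
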
	I1213 10:27:09.772840    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:09.803912    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:09.843377    5404 logs.go:282] 0 containers: []
	W1213 10:27:09.843377    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:09.846881    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:09.876528    5404 logs.go:282] 0 containers: []
	W1213 10:27:09.876528    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:09.879529    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:09.910044    5404 logs.go:282] 0 containers: []
	W1213 10:27:09.910044    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:09.916549    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:09.959417    5404 logs.go:282] 0 containers: []
	W1213 10:27:09.959417    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:09.964602    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:09.999344    5404 logs.go:282] 0 containers: []
	W1213 10:27:09.999344    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:10.002336    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:10.032356    5404 logs.go:282] 0 containers: []
	W1213 10:27:10.032356    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:10.036336    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:10.070437    5404 logs.go:282] 0 containers: []
	W1213 10:27:10.070489    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:10.074554    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:10.112271    5404 logs.go:282] 0 containers: []
	W1213 10:27:10.112330    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:10.112330    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:10.112330    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:10.147886    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:10.147886    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:10.243310    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:10.232461   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.233610   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.235121   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.236121   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.237697   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:10.243405    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:10.243405    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:10.272729    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:10.272729    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:10.326215    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:10.326215    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:09.368853    8476 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 10:27:09.369855    8476 start.go:159] libmachine.API.Create for "kubenet-416400" (driver="docker")
	I1213 10:27:09.369855    8476 client.go:173] LocalClient.Create starting
	I1213 10:27:09.369855    8476 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1213 10:27:09.369855    8476 main.go:143] libmachine: Decoding PEM data...
	I1213 10:27:09.369855    8476 main.go:143] libmachine: Parsing certificate...
	I1213 10:27:09.369855    8476 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1213 10:27:09.369855    8476 main.go:143] libmachine: Decoding PEM data...
	I1213 10:27:09.369855    8476 main.go:143] libmachine: Parsing certificate...
	I1213 10:27:09.375556    8476 cli_runner.go:164] Run: docker network inspect kubenet-416400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 10:27:09.428532    8476 cli_runner.go:211] docker network inspect kubenet-416400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 10:27:09.431540    8476 network_create.go:284] running [docker network inspect kubenet-416400] to gather additional debugging logs...
	I1213 10:27:09.431540    8476 cli_runner.go:164] Run: docker network inspect kubenet-416400
	W1213 10:27:09.477538    8476 cli_runner.go:211] docker network inspect kubenet-416400 returned with exit code 1
	I1213 10:27:09.477538    8476 network_create.go:287] error running [docker network inspect kubenet-416400]: docker network inspect kubenet-416400: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubenet-416400 not found
	I1213 10:27:09.477538    8476 network_create.go:289] output of [docker network inspect kubenet-416400]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubenet-416400 not found
	
	** /stderr **
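
	[Note on the inspect failure above: it is expected on first start. minikube asks Docker for the profile network's name, driver, subnet, gateway, MTU, and attached container IPs in one templated call, and a "not found" exit simply routes to network creation. A condensed Go sketch of the same query; the template is simplified from the log line above, with MTU and container IPs stubbed out:]

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// netInfo holds the fields the templated inspect call extracts.
type netInfo struct {
	Name, Driver, Subnet, Gateway string
	ContainerIPs                  []string
}

func main() {
	// Condensed from the template in the log; MTU and the trailing-comma
	// handling of container IPs are omitted for brevity.
	format := `{"Name":"{{.Name}}","Driver":"{{.Driver}}",` +
		`"Subnet":"{{range .IPAM.Config}}{{.Subnet}}{{end}}",` +
		`"Gateway":"{{range .IPAM.Config}}{{.Gateway}}{{end}}","ContainerIPs":[]}`
	out, err := exec.Command("docker", "network", "inspect", "kubenet-416400", "--format", format).Output()
	if err != nil {
		// Exit status 1 with "network ... not found" routes to creation.
		fmt.Println("network not found, would create it:", err)
		return
	}
	var ni netInfo
	if err := json.Unmarshal(out, &ni); err != nil {
		fmt.Println("unexpected inspect output:", err)
		return
	}
	fmt.Printf("existing network: %+v\n", ni)
}
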
	I1213 10:27:09.481534    8476 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:27:09.553692    8476 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:27:09.568537    8476 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:27:09.580557    8476 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e4c0f0}
	I1213 10:27:09.581551    8476 network_create.go:124] attempt to create docker network kubenet-416400 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1213 10:27:09.584547    8476 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400
	W1213 10:27:09.637542    8476 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400 returned with exit code 1
	W1213 10:27:09.637542    8476 network_create.go:149] failed to create docker network kubenet-416400 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1213 10:27:09.637542    8476 network_create.go:116] failed to create docker network kubenet-416400 192.168.67.0/24, will retry: subnet is taken
	I1213 10:27:09.664108    8476 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:27:09.678099    8476 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001885710}
	I1213 10:27:09.678099    8476 network_create.go:124] attempt to create docker network kubenet-416400 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 10:27:09.682098    8476 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400
	W1213 10:27:09.738074    8476 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400 returned with exit code 1
	W1213 10:27:09.738074    8476 network_create.go:149] failed to create docker network kubenet-416400 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1213 10:27:09.738074    8476 network_create.go:116] failed to create docker network kubenet-416400 192.168.76.0/24, will retry: subnet is taken
	I1213 10:27:09.757990    8476 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:27:09.771930    8476 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001910480}
	I1213 10:27:09.772001    8476 network_create.go:124] attempt to create docker network kubenet-416400 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1213 10:27:09.775120    8476 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400
	I1213 10:27:09.917706    8476 network_create.go:108] docker network kubenet-416400 192.168.85.0/24 created
	I1213 10:27:09.917706    8476 kic.go:121] calculated static IP "192.168.85.2" for the "kubenet-416400" container
	I1213 10:27:09.926674    8476 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 10:27:09.990344    8476 cli_runner.go:164] Run: docker volume create kubenet-416400 --label name.minikube.sigs.k8s.io=kubenet-416400 --label created_by.minikube.sigs.k8s.io=true
	I1213 10:27:10.043336    8476 oci.go:103] Successfully created a docker volume kubenet-416400
	I1213 10:27:10.046336    8476 cli_runner.go:164] Run: docker run --rm --name kubenet-416400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-416400 --entrypoint /usr/bin/test -v kubenet-416400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 10:27:11.508914    8476 cli_runner.go:217] Completed: docker run --rm --name kubenet-416400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-416400 --entrypoint /usr/bin/test -v kubenet-416400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.4625571s)
	I1213 10:27:11.508914    8476 oci.go:107] Successfully prepared a docker volume kubenet-416400
	I1213 10:27:11.508914    8476 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:27:11.508914    8476 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 10:27:11.513316    8476 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-416400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
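
	[Note on the network-creation exchange above (network_create.go): it is a retry ladder. minikube proposes private /24 subnets in a fixed progression (192.168.49.0/24, 58, 67, 76, 85, ...), skips ones already reserved locally, and treats Docker's "Pool overlaps with other one on this address space" error as "subnet taken, try the next". A compact sketch of that loop with illustrative names, not the actual minikube code:]

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "kubenet-416400"
	// Candidate third octets, matching the progression in the log.
	for _, octet := range []int{49, 58, 67, 76, 85} {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
		if err != nil && strings.Contains(string(out), "Pool overlaps") {
			// Same recovery as network_create.go:116 above: subnet is taken, try the next.
			fmt.Printf("subnet %s is taken, will retry\n", subnet)
			continue
		}
		if err != nil {
			fmt.Printf("network create failed: %v: %s\n", err, out)
			return
		}
		fmt.Printf("docker network %s %s created\n", name, subnet)
		return
	}
	fmt.Println("no free subnet in the candidate list")
}

	[Here 192.168.67.0/24 and 192.168.76.0/24 overlapped with existing pools, so the profile landed on 192.168.85.0/24 with static IP 192.168.85.2.]
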
	I1213 10:27:12.902491    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:12.927076    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:12.960518    5404 logs.go:282] 0 containers: []
	W1213 10:27:12.960518    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:12.964255    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:12.994335    5404 logs.go:282] 0 containers: []
	W1213 10:27:12.994335    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:12.998437    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:13.029262    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.029262    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:13.032271    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:13.063264    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.063264    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:13.066261    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:13.100216    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.100278    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:13.103950    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:13.137029    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.137029    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:13.140883    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:13.174413    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.174413    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:13.178202    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:13.207016    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.207016    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:13.207016    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:13.207016    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:13.259542    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:13.259542    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:13.332062    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:13.332062    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:13.371879    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:13.371879    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:13.456462    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:13.445517   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.446626   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.447825   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.448792   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.450006   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:13.456462    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:13.456462    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:15.989415    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:16.012448    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:16.052242    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.052312    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:16.055633    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:16.090683    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.090683    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:16.093931    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:16.133949    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.133949    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:16.138532    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:16.171831    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.171831    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:16.175955    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:16.216817    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.216864    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:16.221712    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:16.258393    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.258393    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:16.261397    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:16.294407    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.294407    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:16.297391    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:16.333410    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.333410    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:16.333410    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:16.333410    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:16.410413    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:16.410413    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:16.450393    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:16.450393    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:16.546373    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:16.533035   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.534931   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.537458   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.540395   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.542178   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:16.546373    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:16.546373    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:16.575806    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:16.575806    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:19.148785    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:19.175720    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:19.209231    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.209231    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:19.217486    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:19.260811    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.260866    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:19.267265    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:19.314924    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.314924    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:19.320918    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:19.357550    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.357550    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:19.361556    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:19.392800    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.392800    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:19.397769    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:19.441959    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.441959    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:19.444967    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:19.479965    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.479965    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:19.484482    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:19.525249    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.525314    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:19.525357    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:19.525357    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:19.570778    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:19.570778    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:19.680558    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:19.668248   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.670354   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.672621   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.673972   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.675837   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:19.680656    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:19.680693    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:19.714060    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:19.714103    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:19.764555    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:19.764555    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:22.334977    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:22.359551    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:22.400355    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.400355    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:22.404363    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:22.438349    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.438349    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:22.442349    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:22.473511    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.473511    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:22.478566    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:22.512393    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.512393    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:22.516409    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:22.550405    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.550405    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:22.553404    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:22.584398    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.584398    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:22.588395    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:22.615398    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.615398    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:22.618396    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:22.649404    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.649404    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:22.649404    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:22.649404    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:22.710398    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:22.710398    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:22.751988    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:22.751988    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:22.843768    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:22.835619   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.836770   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.837683   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.838841   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.839832   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:22.835619   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.836770   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.837683   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.838841   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.839832   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:22.843768    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:22.843768    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:22.871626    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:22.871626    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
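The "container status" command above encodes a two-level fallback: the backtick substitution resolves crictl if it is on PATH (otherwise it leaves the literal word crictl, which then fails to execute), and the trailing `|| sudo docker ps -a` catches either failure. A hedged Go equivalent of the same preference order (an illustration, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl when it resolves on PATH and falls back to
// plain `docker ps -a`, matching the shell one-liner in the log.
func containerStatus() ([]byte, error) {
	if path, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", path, "ps", "-a").Output(); err == nil {
			return out, nil
		}
	}
	return exec.Command("sudo", "docker", "ps", "-a").Output()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}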
	I1213 10:27:25.434319    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:25.459020    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:25.500957    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.500957    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:25.505654    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:25.533996    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.534053    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:25.538297    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:25.569653    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.569653    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:25.573591    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:25.606004    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.606004    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:25.612212    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:25.641756    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.641835    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:25.645703    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:25.677304    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.677342    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:25.680988    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:25.712812    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.712812    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:25.716992    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:25.748063    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.748063    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:25.748063    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:25.748063    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:25.800759    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:25.800759    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:25.873214    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:25.873214    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:25.914015    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:25.914015    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:26.003163    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:25.989841   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.991273   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.992553   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.995529   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.997804   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:25.989841   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.991273   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.992553   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.995529   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.997804   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:26.003163    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:26.003163    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:26.833120    8476 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-416400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (15.3195505s)
	I1213 10:27:26.833120    8476 kic.go:203] duration metric: took 15.3239811s to extract preloaded images to volume ...
	I1213 10:27:26.839444    8476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:27:27.097722    8476 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:27:27.079878659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
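The info.go dump above is the Go struct decoded from `docker system info --format "{{json .}}"`. A self-contained sketch that decodes just the capacity and cgroup fields a driver preflight typically checks (the field names follow the Docker CLI's JSON output; the struct itself is illustrative):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo picks out a handful of fields from the CLI's JSON dump.
type dockerInfo struct {
	NCPU            int    `json:"NCPU"`
	MemTotal        int64  `json:"MemTotal"` // bytes
	CgroupDriver    string `json:"CgroupDriver"`
	OperatingSystem string `json:"OperatingSystem"`
	KernelVersion   string `json:"KernelVersion"`
}

func main() {
	out, err := exec.Command("docker", "system", "info",
		"--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("%d CPUs, %d MiB RAM, cgroup driver %q on %s (%s)\n",
		info.NCPU, info.MemTotal/1024/1024, info.CgroupDriver,
		info.OperatingSystem, info.KernelVersion)
}

On this host it would report 16 CPUs, roughly 32 GiB of RAM, and the cgroupfs driver, matching the dump.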
	I1213 10:27:27.101719    8476 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 10:27:27.338932    8476 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-416400 --name kubenet-416400 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-416400 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-416400 --network kubenet-416400 --ip 192.168.85.2 --volume kubenet-416400:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 10:27:28.058796    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Running}}
	I1213 10:27:28.125687    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Status}}
	I1213 10:27:28.182686    8476 cli_runner.go:164] Run: docker exec kubenet-416400 stat /var/lib/dpkg/alternatives/iptables
	I1213 10:27:28.308932    8476 oci.go:144] the created container "kubenet-416400" has a running status.
	I1213 10:27:28.308932    8476 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa...
	I1213 10:27:28.438434    8476 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 10:27:28.537436    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:28.561363    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:28.619392    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.619392    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:28.623396    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:28.669400    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.669400    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:28.676410    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:28.717401    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.717401    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:28.721393    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:28.757400    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.757400    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:28.760393    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:28.800402    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.800402    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:28.803398    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:28.841400    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.841400    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:28.844399    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:28.878399    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.878399    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:28.882403    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:28.916403    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.916403    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:28.916403    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:28.916403    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:28.992400    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:28.992400    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:29.040404    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:29.040404    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:29.149363    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:29.137915   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.139172   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.141264   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.142415   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.144176   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:29.137915   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.139172   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.141264   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.142415   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.144176   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:29.149363    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:29.149363    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:29.183066    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:29.183066    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:28.513430    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Status}}
	I1213 10:27:28.575704    8476 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 10:27:28.575704    8476 kic_runner.go:114] Args: [docker exec --privileged kubenet-416400 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 10:27:28.715410    8476 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa...
	I1213 10:27:31.090843    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Status}}
	I1213 10:27:31.148980    8476 machine.go:94] provisionDockerMachine start ...
	I1213 10:27:31.152618    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:31.213696    8476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:27:31.227691    8476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 55079 <nil> <nil>}
	I1213 10:27:31.227691    8476 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:27:31.426494    8476 main.go:143] libmachine: SSH cmd err, output: <nil>: kubenet-416400
	
	I1213 10:27:31.426494    8476 ubuntu.go:182] provisioning hostname "kubenet-416400"
	I1213 10:27:31.430633    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:31.483323    8476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:27:31.484332    8476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 55079 <nil> <nil>}
	I1213 10:27:31.484332    8476 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubenet-416400 && echo "kubenet-416400" | sudo tee /etc/hostname
	I1213 10:27:31.695552    8476 main.go:143] libmachine: SSH cmd err, output: <nil>: kubenet-416400
	
	I1213 10:27:31.701394    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:31.759724    8476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:27:31.759724    8476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 55079 <nil> <nil>}
	I1213 10:27:31.759724    8476 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubenet-416400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-416400/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubenet-416400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:27:31.957771    8476 main.go:143] libmachine: SSH cmd err, output: <nil>: 
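The empty SSH output above means the /etc/hosts guard ran with nothing to report. The shell logic is idempotent: skip if the hostname is already present, rewrite an existing 127.0.1.1 entry if there is one, otherwise append a fresh entry. A small Go helper that renders the same snippet for an arbitrary hostname (hypothetical templating; the shell body is taken verbatim from the log):

package main

import "fmt"

// ensureHostsCmd renders the idempotent /etc/hosts guard for a hostname.
func ensureHostsCmd(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() { fmt.Println(ensureHostsCmd("kubenet-416400")) }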
	I1213 10:27:31.957771    8476 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1213 10:27:31.957771    8476 ubuntu.go:190] setting up certificates
	I1213 10:27:31.957771    8476 provision.go:84] configureAuth start
	I1213 10:27:31.961622    8476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-416400
	I1213 10:27:32.029795    8476 provision.go:143] copyHostCerts
	I1213 10:27:32.030302    8476 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1213 10:27:32.030343    8476 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1213 10:27:32.030585    8476 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1213 10:27:32.031834    8476 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1213 10:27:32.031890    8476 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1213 10:27:32.032201    8476 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1213 10:27:32.033307    8476 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1213 10:27:32.033341    8476 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1213 10:27:32.033717    8476 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1213 10:27:32.034519    8476 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubenet-416400 san=[127.0.0.1 192.168.85.2 kubenet-416400 localhost minikube]
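The provision step above mints a server certificate against the local CA with the logged SAN set: loopback, the container's 192.168.85.2 address, and the hostnames the API will be reached by. For illustration only, here is a standard-library sketch that issues a certificate with that SAN list; it self-signs for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair shown in the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubenet-416400"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as logged: san=[127.0.0.1 192.168.85.2 kubenet-416400 localhost minikube]
		DNSNames:    []string{"kubenet-416400", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	// Self-signed for brevity: the template doubles as its own issuer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}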
	I1213 10:27:32.150424    8476 provision.go:177] copyRemoteCerts
	I1213 10:27:32.155416    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:27:32.160422    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:32.214413    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	I1213 10:27:32.367375    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:27:32.404881    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I1213 10:27:32.437627    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:27:32.464627    8476 provision.go:87] duration metric: took 506.8482ms to configureAuth
	I1213 10:27:32.464627    8476 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:27:32.465634    8476 config.go:182] Loaded profile config "kubenet-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 10:27:32.469262    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:32.530015    8476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:27:32.530111    8476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 55079 <nil> <nil>}
	I1213 10:27:32.530111    8476 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 10:27:32.727229    8476 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1213 10:27:32.727229    8476 ubuntu.go:71] root file system type: overlay
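The probe above, `df --output=fstype / | tail -n 1`, is how the provisioner learns the root filesystem type (overlay inside the kic container), which feeds into the docker unit it writes next. The same probe in Go (a sketch, assuming GNU df is available, as it is in the Debian-based kic image):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// rootFsType returns the filesystem type of /, mirroring
// `df --output=fstype / | tail -n 1`.
func rootFsType() (string, error) {
	out, err := exec.Command("df", "--output=fstype", "/").Output()
	if err != nil {
		return "", err
	}
	fields := strings.Fields(strings.TrimSpace(string(out)))
	return fields[len(fields)-1], nil // last field skips the "Type" header
}

func main() {
	t, err := rootFsType()
	if err != nil {
		panic(err)
	}
	fmt.Println(t) // e.g. "overlay" inside the kic container
}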
	I1213 10:27:32.727229    8476 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 10:27:32.730229    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:32.781835    8476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:27:32.782115    8476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 55079 <nil> <nil>}
	I1213 10:27:32.782115    8476 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 10:27:32.980566    8476 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 10:27:32.985113    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:33.047448    8476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:27:33.048094    8476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 55079 <nil> <nil>}
	I1213 10:27:33.048138    8476 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
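The one-liner above is a change-detection guard: `diff -u` exits non-zero when the rendered unit differs from what is installed, and only then does the `||` branch move the new file into place and daemon-reload/restart docker. The same idiom expressed locally in Go (illustrative; not the ssh_runner implementation):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// updateIfChanged rewrites path only when rendered differs from the current
// contents, so the caller can skip the service restart on a no-op.
func updateIfChanged(path string, rendered []byte) (bool, error) {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return false, nil // unchanged: nothing to install, nothing to restart
	}
	if err := os.WriteFile(path, rendered, 0o644); err != nil {
		return false, err
	}
	return true, nil // caller should daemon-reload and restart the service
}

func main() {
	changed, err := updateIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
	if err != nil {
		panic(err)
	}
	fmt.Println("changed:", changed)
}

The unified diff logged at 10:27:34 below is exactly this guard firing on first provision, when the stock unit still differs from minikube's rendering.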
	I1213 10:27:31.746729    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:31.766711    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:31.799712    5404 logs.go:282] 0 containers: []
	W1213 10:27:31.799712    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:31.802714    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:31.848351    5404 logs.go:282] 0 containers: []
	W1213 10:27:31.848351    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:31.852710    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:31.893847    5404 logs.go:282] 0 containers: []
	W1213 10:27:31.894377    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:31.897862    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:31.937061    5404 logs.go:282] 0 containers: []
	W1213 10:27:31.937061    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:31.942850    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:31.992025    5404 logs.go:282] 0 containers: []
	W1213 10:27:31.992025    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:31.996453    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:32.043414    5404 logs.go:282] 0 containers: []
	W1213 10:27:32.043414    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:32.047410    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:32.082416    5404 logs.go:282] 0 containers: []
	W1213 10:27:32.082416    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:32.086413    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:32.117413    5404 logs.go:282] 0 containers: []
	W1213 10:27:32.117413    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:32.117413    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:32.117413    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:32.184436    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:32.184436    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:32.248252    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:32.248252    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:32.288323    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:32.288323    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:32.395681    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:32.380582   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.381602   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.383843   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.385774   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.388153   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:32.380582   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.381602   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.383843   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.385774   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.388153   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:32.395681    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:32.395681    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:34.939082    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:34.963857    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:35.002856    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.002856    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:35.005854    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:35.038851    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.038851    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:35.041857    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:35.073853    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.073853    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:35.077869    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:35.110852    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.110852    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:35.113850    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:35.152093    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.152093    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:35.156094    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:35.188087    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.188087    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:35.192090    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:35.222187    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.222187    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:35.226185    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:35.257190    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.257190    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:35.257190    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:35.257190    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:35.374442    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:35.357763   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.358774   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.360108   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.362218   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.363767   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:35.357763   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.358774   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.360108   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.362218   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.363767   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:35.374442    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:35.374442    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:35.414747    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:35.414747    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:35.470732    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:35.470732    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:35.530744    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:35.530744    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:34.752548    8476 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-13 10:27:32.964414860 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1213 10:27:34.752590    8476 machine.go:97] duration metric: took 3.6035571s to provisionDockerMachine
	I1213 10:27:34.752590    8476 client.go:176] duration metric: took 25.382363s to LocalClient.Create
	I1213 10:27:34.752660    8476 start.go:167] duration metric: took 25.3823991s to libmachine.API.Create "kubenet-416400"
	I1213 10:27:34.752660    8476 start.go:293] postStartSetup for "kubenet-416400" (driver="docker")
	I1213 10:27:34.752689    8476 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:27:34.757321    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:27:34.760792    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:34.815346    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	I1213 10:27:34.967363    8476 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:27:34.976448    8476 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:27:34.976489    8476 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:27:34.976523    8476 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1213 10:27:34.976670    8476 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1213 10:27:34.977231    8476 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> 29682.pem in /etc/ssl/certs
	I1213 10:27:34.981302    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 10:27:34.993858    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /etc/ssl/certs/29682.pem (1708 bytes)
	I1213 10:27:35.021854    8476 start.go:296] duration metric: took 269.1608ms for postStartSetup
	I1213 10:27:35.027861    8476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-416400
	I1213 10:27:35.080870    8476 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\config.json ...
	I1213 10:27:35.089862    8476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:27:35.093865    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:35.150107    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	I1213 10:27:35.268185    8476 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:27:35.276190    8476 start.go:128] duration metric: took 25.9099265s to createHost
	I1213 10:27:35.276190    8476 start.go:83] releasing machines lock for "kubenet-416400", held for 25.9099265s
	I1213 10:27:35.279209    8476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-416400
	I1213 10:27:35.343302    8476 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1213 10:27:35.346842    8476 ssh_runner.go:195] Run: cat /version.json
	I1213 10:27:35.350867    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:35.352295    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:35.411739    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	I1213 10:27:35.414747    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	W1213 10:27:35.548301    8476 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
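This status-127 failure appears to be the origin of the registry warning printed a few lines below: the connectivity probe invokes curl.exe, the Windows binary name, inside the Linux guest, where only curl exists, so the check fails before it ever reaches the network. One plausible repair (an assumption, not minikube's actual code) is to choose the binary name for the system the command will run on rather than for the host:

package main

import (
	"fmt"
	"runtime"
)

// curlBinary returns the curl executable name for the system the command will
// actually run on. remote=true means "inside the Linux guest over SSH", where
// the .exe suffix is never correct even when the host is Windows.
func curlBinary(remote bool) string {
	if !remote && runtime.GOOS == "windows" {
		return "curl.exe"
	}
	return "curl"
}

func main() {
	fmt.Println(curlBinary(true))  // curl (guest side)
	fmt.Println(curlBinary(false)) // curl, or curl.exe on a Windows host
}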
	I1213 10:27:35.553481    8476 ssh_runner.go:195] Run: systemctl --version
	I1213 10:27:35.573784    8476 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:27:35.585474    8476 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:27:35.589468    8476 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:27:35.633416    8476 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 10:27:35.633416    8476 start.go:496] detecting cgroup driver to use...
	I1213 10:27:35.633416    8476 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:27:35.633416    8476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1213 10:27:35.649009    8476 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1213 10:27:35.649009    8476 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
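The registry probe at 10:27:35.343302 runs the host-named binary curl.exe inside the Linux guest, where only curl exists, so it exits 127 ("command not found") and minikube surfaces it as the connectivity warning above. A minimal manual re-check under the same assumptions (profile kubenet-416400, curl present in the kicbase image):

	# re-run the probe with the Linux binary name instead of curl.exe
	minikube -p kubenet-416400 ssh -- curl -sS -m 2 https://registry.k8s.io/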
	I1213 10:27:35.671618    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:27:35.696739    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:27:35.711492    8476 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:27:35.715488    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:27:35.732484    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:27:35.752096    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:27:35.772619    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:27:35.796702    8476 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:27:35.815300    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:27:35.839600    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:27:35.861332    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 10:27:35.884116    8476 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:27:35.903094    8476 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:27:35.919226    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:27:36.090670    8476 ssh_runner.go:195] Run: sudo systemctl restart containerd
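The run of sed edits above rewrites /etc/containerd/config.toml so containerd matches the cgroupfs driver detected on the host. Condensed to the driver- and CNI-relevant lines (commands taken verbatim from this log), the sequence is:

	# pin the pause image and force the runc cgroup driver to cgroupfs
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	# point CNI at the standard conf dir, then apply
	sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml
	sudo systemctl daemon-reload && sudo systemctl restart containerd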
	I1213 10:27:36.249395    8476 start.go:496] detecting cgroup driver to use...
	I1213 10:27:36.249395    8476 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:27:36.253347    8476 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 10:27:36.275349    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:27:36.297606    8476 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:27:36.328195    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:27:36.353573    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:27:36.372805    8476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:27:36.406354    8476 ssh_runner.go:195] Run: which cri-dockerd
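The crictl.yaml written at 10:27:36.372805 repoints crictl from containerd to the cri-dockerd socket. A quick way to confirm what was written (path from this log):

	# show the CRI endpoint crictl will use from now on
	cat /etc/crictl.yaml    # expected: runtime-endpoint: unix:///var/run/cri-dockerd.sock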
	I1213 10:27:36.417745    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 10:27:36.432809    8476 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (196 bytes)
	I1213 10:27:36.462872    8476 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 10:27:36.616454    8476 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 10:27:36.759020    8476 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 10:27:36.759020    8476 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 10:27:36.784951    8476 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1213 10:27:36.811665    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:27:36.964769    8476 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 10:27:37.921141    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:27:37.944144    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 10:27:37.967237    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 10:27:37.988498    8476 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 10:27:38.188916    8476 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 10:27:38.358397    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:27:38.521403    8476 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 10:27:38.546402    8476 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1213 10:27:38.569221    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:27:38.730646    8476 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 10:27:38.878189    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 10:27:38.898180    8476 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 10:27:38.902189    8476 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 10:27:38.911194    8476 start.go:564] Will wait 60s for crictl version
	I1213 10:27:38.916189    8476 ssh_runner.go:195] Run: which crictl
	I1213 10:27:38.926186    8476 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:27:38.973186    8476 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1213 10:27:38.978795    8476 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 10:27:39.038631    8476 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
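The version probes above confirm the CRI runtime is Docker 29.1.2 behind cri-dockerd. The equivalent manual checks inside the guest, assuming the socket path configured earlier:

	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
	docker version --format '{{.Server.Version}}'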
	I1213 10:27:38.092084    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:38.124676    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:38.161924    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.161924    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:38.164928    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:38.198945    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.198945    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:38.201915    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:38.228927    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.228927    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:38.231926    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:38.270851    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.270955    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:38.276558    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:38.313393    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.313393    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:38.316394    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:38.348406    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.348406    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:38.351414    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:38.380397    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.380397    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:38.385402    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:38.417397    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.417397    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:38.417397    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:38.417397    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:38.488395    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:38.488395    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:38.526408    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:38.526408    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:38.618667    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:38.608046   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.608871   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.611071   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.612089   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.612946   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:38.608046   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.608871   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.611071   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.612089   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.612946   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:38.618667    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:38.618667    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:38.648614    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:38.649617    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
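The 5404 lines interleaved here belong to a different test profile (running Kubernetes v1.35.0-beta.0) that is polling for a control plane that never comes up: each cycle lists the expected k8s_* containers, finds none, and re-gathers kubelet, dmesg, and Docker logs, with kubectl describe nodes failing on connection refused to localhost:8443. The container scan it repeats each round is just (filter taken from the log):

	# 0 matches here is what produces the "No container was found" warnings
	docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}}'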
	I1213 10:27:39.102779    8476 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.2 ...
	I1213 10:27:39.107988    8476 cli_runner.go:164] Run: docker exec -t kubenet-416400 dig +short host.docker.internal
	I1213 10:27:39.257345    8476 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1213 10:27:39.260347    8476 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1213 10:27:39.268341    8476 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:27:39.287341    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:39.347887    8476 kubeadm.go:884] updating cluster {Name:kubenet-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:27:39.347887    8476 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:27:39.352726    8476 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 10:27:39.403212    8476 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 10:27:39.403212    8476 docker.go:621] Images already preloaded, skipping extraction
	I1213 10:27:39.407208    8476 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 10:27:39.440282    8476 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 10:27:39.440822    8476 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:27:39.440822    8476 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 docker true true} ...
	I1213 10:27:39.441138    8476 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubenet-416400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --pod-cidr=10.244.0.0/16
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kubenet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
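The kubelet unit fragment above is rendered into the systemd drop-in that is scp'd at 10:27:39.597066. To see the flags kubelet will actually start with, one hedged check inside the guest:

	# prints kubelet.service plus the 10-kubeadm.conf drop-in; the ExecStart line
	# should carry --node-ip=192.168.85.2 and --hostname-override=kubenet-416400
	sudo systemctl cat kubelet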
	I1213 10:27:39.446529    8476 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1213 10:27:39.559260    8476 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1213 10:27:39.559320    8476 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:27:39.559347    8476 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-416400 NodeName:kubenet-416400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:27:39.559347    8476 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubenet-416400"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 10:27:39.563035    8476 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 10:27:39.576055    8476 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:27:39.580043    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:27:39.597066    8476 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (338 bytes)
	I1213 10:27:39.616038    8476 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 10:27:39.638041    8476 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
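The staged kubeadm.yaml.new carries the four-document config dumped above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). If a config like this needs a sanity check before init, recent kubeadm releases ship a validator; a sketch using the binary path from this log:

	# validate the multi-document config without touching the cluster
	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new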
	I1213 10:27:39.672042    8476 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:27:39.680043    8476 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:27:39.700046    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:27:39.887167    8476 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:27:39.917364    8476 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400 for IP: 192.168.85.2
	I1213 10:27:39.917364    8476 certs.go:195] generating shared ca certs ...
	I1213 10:27:39.917364    8476 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:39.918062    8476 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1213 10:27:39.918062    8476 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1213 10:27:39.918062    8476 certs.go:257] generating profile certs ...
	I1213 10:27:39.918912    8476 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\client.key
	I1213 10:27:39.918966    8476 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\client.crt with IP's: []
	I1213 10:27:39.969525    8476 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\client.crt ...
	I1213 10:27:39.969525    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\client.crt: {Name:mkded0c3a33573ddb9efde80db53622d23beebc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:39.970523    8476 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\client.key ...
	I1213 10:27:39.970523    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\client.key: {Name:mkddb0c680c1cfbc7fb76412dc59f990aa3351fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:39.970523    8476 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.key.da8001c6
	I1213 10:27:39.970523    8476 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.crt.da8001c6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1213 10:27:40.148355    8476 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.crt.da8001c6 ...
	I1213 10:27:40.148355    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.crt.da8001c6: {Name:mkb638048bd89c15c2729273b91ace1d4490353e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:40.148703    8476 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.key.da8001c6 ...
	I1213 10:27:40.148703    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.key.da8001c6: {Name:mk4e2e28e87911a65a5741680815685d917d2bc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:40.149871    8476 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.crt.da8001c6 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.crt
	I1213 10:27:40.164141    8476 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.key.da8001c6 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.key
	I1213 10:27:40.165495    8476 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.key
	I1213 10:27:40.165495    8476 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.crt with IP's: []
	I1213 10:27:40.389110    8476 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.crt ...
	I1213 10:27:40.389110    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.crt: {Name:mk9ea56953d9936fd5e08b8dc707cf8c179327b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:40.390173    8476 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.key ...
	I1213 10:27:40.390173    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.key: {Name:mk1d05f99191685ca712d4d7978411bd7096c85b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:40.404560    8476 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem (1338 bytes)
	W1213 10:27:40.404560    8476 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968_empty.pem, impossibly tiny 0 bytes
	I1213 10:27:40.404560    8476 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1213 10:27:40.404560    8476 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1213 10:27:40.404560    8476 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1213 10:27:40.405551    8476 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1213 10:27:40.405551    8476 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem (1708 bytes)
	I1213 10:27:40.406555    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:27:40.441360    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:27:40.476758    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:27:40.508936    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 10:27:40.539795    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 10:27:40.569170    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 10:27:40.700611    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:27:40.735214    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:27:40.767361    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /usr/share/ca-certificates/29682.pem (1708 bytes)
	I1213 10:27:40.807746    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:27:40.841101    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem --> /usr/share/ca-certificates/2968.pem (1338 bytes)
	I1213 10:27:40.876541    8476 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
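At this point all CA, profile, and proxy-client certs have been copied under /var/lib/minikube/certs in the guest. To double-check one of them, e.g. that the apiserver cert carries the SANs generated above (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2), openssl is already available in the guest per the following log lines:

	# dump the cert and eyeball the X509v3 Subject Alternative Name block
	sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt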
	I1213 10:27:40.905929    8476 ssh_runner.go:195] Run: openssl version
	I1213 10:27:40.919422    8476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29682.pem
	I1213 10:27:40.935412    8476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29682.pem /etc/ssl/certs/29682.pem
	I1213 10:27:40.958800    8476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29682.pem
	I1213 10:27:40.966774    8476 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:48 /usr/share/ca-certificates/29682.pem
	I1213 10:27:40.970772    8476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29682.pem
	I1213 10:27:41.020692    8476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:27:41.042422    8476 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/29682.pem /etc/ssl/certs/3ec20f2e.0
	I1213 10:27:41.062440    8476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:27:41.083044    8476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:27:41.101089    8476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:27:41.109913    8476 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:27:41.115807    8476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:27:41.166390    8476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:27:41.184269    8476 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 10:27:41.205563    8476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2968.pem
	I1213 10:27:41.225153    8476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2968.pem /etc/ssl/certs/2968.pem
	I1213 10:27:41.244522    8476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2968.pem
	I1213 10:27:41.255274    8476 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:48 /usr/share/ca-certificates/2968.pem
	I1213 10:27:41.258261    8476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2968.pem
	I1213 10:27:41.337148    8476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:27:41.361850    8476 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2968.pem /etc/ssl/certs/51391683.0
	I1213 10:27:41.386416    8476 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:27:41.397702    8476 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 10:27:41.398038    8476 kubeadm.go:401] StartCluster: {Name:kubenet-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:27:41.402376    8476 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 10:27:41.436826    8476 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:27:41.456770    8476 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:27:41.472386    8476 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:27:41.476747    8476 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:27:41.495422    8476 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:27:41.495422    8476 kubeadm.go:158] found existing configuration files:
	
	I1213 10:27:41.499410    8476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 10:27:41.516241    8476 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:27:41.521896    8476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:27:41.541264    8476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 10:27:41.558570    8476 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:27:41.564101    8476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:27:41.584137    8476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 10:27:41.604304    8476 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:27:41.610955    8476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:27:41.630902    8476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 10:27:41.645473    8476 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:27:41.649275    8476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
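Since this is a first start, none of the four kubeconfigs exist, so every grep exits 2 and minikube falls through to an unconditional rm -f; the whole stale-config sweep above condenses to a loop like this (same files and endpoint as the log):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
	    || sudo rm -f /etc/kubernetes/$f.conf
	done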
	I1213 10:27:41.666272    8476 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:27:41.782563    8476 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1213 10:27:41.788925    8476 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1213 10:27:41.907030    8476 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
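kubeadm preflight warnings are non-fatal, and the init invocation above additionally passes --ignore-preflight-errors for checks such as Swap, SystemVerification, and Port-10250, so these three [WARNING] lines do not stop the bootstrap. To rerun just the preflight phase by hand, a sketch with the paths from this log:

	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml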
	I1213 10:27:41.206851    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:41.233354    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:41.265257    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.265257    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:41.269906    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:41.306686    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.306741    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:41.310710    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:41.357371    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.357427    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:41.361994    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:41.408206    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.408206    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:41.412215    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:41.440724    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.440761    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:41.444506    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:41.485572    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.485572    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:41.489246    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:41.524191    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.524191    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:41.528287    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:41.561636    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.561708    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:41.561708    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:41.561743    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:41.640633    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:41.640633    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:41.679302    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:41.680274    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:41.769509    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:41.756355   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.757496   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.758621   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.762100   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.763629   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:41.756355   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.757496   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.758621   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.762100   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.763629   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:41.769509    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:41.769509    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:41.799016    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:41.799067    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:44.369546    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:44.392404    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:44.422173    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.422173    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:44.426709    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:44.462171    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.462253    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:44.466284    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:44.494675    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.494675    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:44.499090    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:44.525551    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.525576    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:44.529460    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:44.557893    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.557944    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:44.561644    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:44.592507    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.592507    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:44.598127    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:44.628090    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.628112    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:44.632134    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:44.680973    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.681027    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:44.681074    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:44.681074    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:44.750683    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:44.750683    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:44.791179    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:44.791179    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:44.880384    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:44.868761   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.869600   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.870808   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.872391   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.873598   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:44.868761   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.869600   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.870808   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.872391   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.873598   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:44.880415    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:44.880415    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:44.912168    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:44.912168    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:47.473178    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:47.501052    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:47.534467    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.534540    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:47.538128    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:47.568455    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.568455    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:47.575037    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:47.610628    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.610628    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:47.614588    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:47.650306    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.650306    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:47.655401    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:47.688313    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.688313    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:47.691318    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:47.722314    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.722859    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:47.727885    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:47.758032    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.758032    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:47.761680    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:47.793670    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.793670    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:47.793670    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:47.793670    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:47.882682    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:47.871699   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.872599   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.874519   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.875664   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.876452   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:47.882682    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:47.882682    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:47.916355    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:47.916355    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:47.969201    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:47.969201    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:48.035144    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:48.036141    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
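Each cycle above is one pass of minikube's log collector while it waits for the control plane: it looks for a running kube-apiserver process, probes each expected component container by the k8s_ name prefix that cri-dockerd gives pod containers, and records that every probe came back empty. A minimal Go sketch of that probe loop, assuming only that docker is on PATH (the component list mirrors the log; this illustrates the pattern, it is not minikube's own code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // The components probed in the log above, in the same order.
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, c := range components {
            // cri-dockerd names pod containers k8s_<container>_<pod>_..., so a
            // name filter on the k8s_ prefix finds them even after they exit.
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Printf("probe %s failed: %v\n", c, err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%d containers matching %q: %v\n", len(ids), c, ids)
        }
    }

An empty result for every component, as seen here, means kubelet never managed to start any control-plane static pod.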
	I1213 10:27:50.578488    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:50.600943    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:50.631833    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.631833    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:50.635998    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:50.674649    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.674649    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:50.677731    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:50.712195    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.712322    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:50.716398    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:50.750764    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.750764    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:50.754125    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:50.786595    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.786595    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:50.790175    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:50.818734    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.818734    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:50.821737    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:50.854679    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.854679    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:50.859104    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:50.889584    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.889584    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:50.889584    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:50.889584    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:50.947004    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:50.947004    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:50.984338    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:50.984338    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:51.071556    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:51.060341   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.061513   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.063176   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.064640   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.065750   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:51.071556    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:51.071556    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:51.102630    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:51.102630    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:53.655677    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:53.682918    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:53.715653    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.715653    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:53.718956    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:53.747498    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.747498    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:53.751451    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:53.781030    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.781060    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:53.785519    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:53.815077    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.815077    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:53.818373    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:53.851406    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.851432    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:53.855158    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:53.886371    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.886426    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:53.890230    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:53.921595    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.921595    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:53.925821    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:53.958793    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.958867    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:53.958867    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:53.958867    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:54.023643    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:54.023643    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:54.069221    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:54.069221    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:54.158534    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:54.148053   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:54.149254   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:54.150659   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:54.151827   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:54.152932   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:54.158534    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:54.158534    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:54.187711    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:54.187711    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
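Across these identical cycles the one hard signal is kubectl's failure mode: dial tcp [::1]:8443: connect: connection refused, meaning nothing is listening on the apiserver port at all, as opposed to a timeout, which would point at a firewall or a hung process. That distinction is easy to reproduce; a small sketch, meant to run where the port would be reachable, i.e. inside the node container (on Windows the refused case surfaces as WSAECONNREFUSED rather than this errno):

    package main

    import (
        "errors"
        "fmt"
        "net"
        "os"
        "syscall"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        switch {
        case err == nil:
            conn.Close()
            fmt.Println("something is listening on 8443")
        case errors.Is(err, syscall.ECONNREFUSED):
            // Matches the log above: no apiserver ever bound the port.
            fmt.Println("connection refused: apiserver not running")
        case os.IsTimeout(err):
            fmt.Println("timeout: port filtered or process hung")
        default:
            fmt.Printf("other error: %v\n", err)
        }
    }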
	I1213 10:27:57.321321    8476 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 10:27:57.321858    8476 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:27:57.322090    8476 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:27:57.322290    8476 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:27:57.322547    8476 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:27:57.322713    8476 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:27:57.327382    8476 out.go:252]   - Generating certificates and keys ...
	I1213 10:27:57.327382    8476 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:27:57.327991    8476 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:27:57.328219    8476 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 10:27:57.328219    8476 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 10:27:57.328219    8476 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 10:27:57.328219    8476 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 10:27:57.328219    8476 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 10:27:57.328961    8476 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kubenet-416400 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 10:27:57.328961    8476 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 10:27:57.328961    8476 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kubenet-416400 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 10:27:57.328961    8476 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 10:27:57.328961    8476 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 10:27:57.328961    8476 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 10:27:57.328961    8476 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:27:57.328961    8476 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:27:57.329956    8476 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:27:57.329956    8476 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:27:57.329956    8476 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:27:57.329956    8476 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:27:57.329956    8476 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:27:57.329956    8476 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:27:57.333993    8476 out.go:252]   - Booting up control plane ...
	I1213 10:27:57.333993    8476 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:27:57.333993    8476 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:27:57.333993    8476 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:27:57.333993    8476 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:27:57.333993    8476 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:27:57.334957    8476 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:27:57.334957    8476 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:27:57.334957    8476 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:27:57.334957    8476 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:27:57.334957    8476 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:27:57.334957    8476 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.499474ms
	I1213 10:27:57.334957    8476 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 10:27:57.335962    8476 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1213 10:27:57.335962    8476 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 10:27:57.335962    8476 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 10:27:57.335962    8476 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.506067897s
	I1213 10:27:57.335962    8476 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.281282907s
	I1213 10:27:57.335962    8476 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 9.504426001s
	I1213 10:27:57.335962    8476 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 10:27:57.336957    8476 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 10:27:57.336957    8476 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 10:27:57.336957    8476 kubeadm.go:319] [mark-control-plane] Marking the node kubenet-416400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 10:27:57.336957    8476 kubeadm.go:319] [bootstrap-token] Using token: fr9253.a366cb10hxgbs57g
	I1213 10:27:57.338959    8476 out.go:252]   - Configuring RBAC rules ...
	I1213 10:27:57.338959    8476 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 10:27:57.339952    8476 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 10:27:57.339952    8476 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 10:27:57.339952    8476 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 10:27:57.339952    8476 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 10:27:57.339952    8476 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 10:27:57.340953    8476 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 10:27:57.340953    8476 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 10:27:57.340953    8476 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 10:27:57.340953    8476 kubeadm.go:319] 
	I1213 10:27:57.340953    8476 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 10:27:57.340953    8476 kubeadm.go:319] 
	I1213 10:27:57.340953    8476 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 10:27:57.340953    8476 kubeadm.go:319] 
	I1213 10:27:57.340953    8476 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 10:27:57.340953    8476 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 10:27:57.340953    8476 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 10:27:57.341967    8476 kubeadm.go:319] 
	I1213 10:27:57.341967    8476 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 10:27:57.341967    8476 kubeadm.go:319] 
	I1213 10:27:57.341967    8476 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 10:27:57.341967    8476 kubeadm.go:319] 
	I1213 10:27:57.341967    8476 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 10:27:57.341967    8476 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 10:27:57.341967    8476 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 10:27:57.341967    8476 kubeadm.go:319] 
	I1213 10:27:57.341967    8476 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 10:27:57.341967    8476 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 10:27:57.341967    8476 kubeadm.go:319] 
	I1213 10:27:57.342958    8476 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token fr9253.a366cb10hxgbs57g \
	I1213 10:27:57.342958    8476 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4e186cc62273bb1ac6e3884beccb3b1254d51eaaca530d60f3ff3ceb07e5bb75 \
	I1213 10:27:57.342958    8476 kubeadm.go:319] 	--control-plane 
	I1213 10:27:57.342958    8476 kubeadm.go:319] 
	I1213 10:27:57.342958    8476 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 10:27:57.342958    8476 kubeadm.go:319] 
	I1213 10:27:57.342958    8476 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token fr9253.a366cb10hxgbs57g \
	I1213 10:27:57.342958    8476 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4e186cc62273bb1ac6e3884beccb3b1254d51eaaca530d60f3ff3ceb07e5bb75 
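The --discovery-token-ca-cert-hash that kubeadm prints is SHA-256 over the DER-encoded public key (SubjectPublicKeyInfo) of the cluster CA; joining nodes use it to pin the CA before trusting anything the bootstrap token tells them. A self-contained sketch of how that digest is derived from the CA certificate (path taken from these logs; the hashing scheme is kubeadm's documented one):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        // The cluster CA used throughout these logs.
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the whole cert.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }

Run against this node's ca.crt, it should reproduce the sha256:4e186cc6... value in the join commands above.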
	I1213 10:27:57.342958    8476 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1213 10:27:57.342958    8476 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 10:27:57.348959    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:27:57.348959    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kubenet-416400 minikube.k8s.io/updated_at=2025_12_13T10_27_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453 minikube.k8s.io/name=kubenet-416400 minikube.k8s.io/primary=true
	I1213 10:27:57.359965    8476 ops.go:34] apiserver oom_adj: -16
	I1213 10:27:57.481312    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:27:57.982343    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:27:58.481678    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:27:58.981222    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:27:59.482569    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:27:59.981670    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:28:00.482737    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:28:00.667261    8476 kubeadm.go:1114] duration metric: took 3.3242542s to wait for elevateKubeSystemPrivileges
	I1213 10:28:00.667261    8476 kubeadm.go:403] duration metric: took 19.2689858s to StartCluster
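The burst of kubectl get sa default calls at roughly 500 ms intervals is minikube waiting for the default service account to exist before it grants kube-system cluster-admin, the elevateKubeSystemPrivileges step reported just above as taking 3.3 s. The shape of that wait is a fixed-interval poll with a deadline; a sketch (command line taken from the log, helper name illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls `kubectl get sa default` at the ~500ms cadence
    // visible in the log until it succeeds or the deadline passes.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            err := exec.Command("kubectl", "get", "sa", "default",
                "--kubeconfig", kubeconfig).Run()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("default service account never appeared: %w", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("default service account exists")
    }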
	I1213 10:28:00.667261    8476 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:28:00.667261    8476 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:28:00.668362    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:28:00.670249    8476 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 10:28:00.670405    8476 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:28:00.670495    8476 addons.go:70] Setting storage-provisioner=true in profile "kubenet-416400"
	I1213 10:28:00.670495    8476 addons.go:239] Setting addon storage-provisioner=true in "kubenet-416400"
	I1213 10:28:00.670495    8476 addons.go:70] Setting default-storageclass=true in profile "kubenet-416400"
	I1213 10:28:00.670495    8476 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubenet-416400"
	I1213 10:28:00.670495    8476 host.go:66] Checking if "kubenet-416400" exists ...
	I1213 10:28:00.670495    8476 config.go:182] Loaded profile config "kubenet-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 10:28:00.670296    8476 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 10:28:00.672621    8476 out.go:179] * Verifying Kubernetes components...
	I1213 10:28:00.680707    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Status}}
	I1213 10:28:00.681870    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:28:00.683512    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Status}}
	I1213 10:28:00.745823    8476 addons.go:239] Setting addon default-storageclass=true in "kubenet-416400"
	I1213 10:28:00.745823    8476 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:27:56.751844    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:56.777473    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:56.819791    5404 logs.go:282] 0 containers: []
	W1213 10:27:56.819791    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:56.823836    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:56.851634    5404 logs.go:282] 0 containers: []
	W1213 10:27:56.851634    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:56.856515    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:56.890733    5404 logs.go:282] 0 containers: []
	W1213 10:27:56.890733    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:56.896015    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:56.929283    5404 logs.go:282] 0 containers: []
	W1213 10:27:56.929283    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:56.933600    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:56.965281    5404 logs.go:282] 0 containers: []
	W1213 10:27:56.965380    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:56.971621    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:57.007594    5404 logs.go:282] 0 containers: []
	W1213 10:27:57.007594    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:57.011652    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:57.041984    5404 logs.go:282] 0 containers: []
	W1213 10:27:57.041984    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:57.047208    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:57.080712    5404 logs.go:282] 0 containers: []
	W1213 10:27:57.080712    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:57.080712    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:57.080712    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:57.149704    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:57.149704    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:57.193071    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:57.193071    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:57.285994    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:57.274215   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:57.274873   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:57.277962   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:57.279748   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:57.281147   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:57.285994    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:57.285994    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:57.321321    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:57.321321    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:59.885480    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:59.908525    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:59.938475    5404 logs.go:282] 0 containers: []
	W1213 10:27:59.938475    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:59.942628    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:59.971795    5404 logs.go:282] 0 containers: []
	W1213 10:27:59.971795    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:59.980520    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:00.013354    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.013413    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:00.017504    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:00.052020    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.052020    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:00.055918    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:00.092456    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.092456    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:00.099457    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:00.132599    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.132599    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:00.136451    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:00.166632    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.166765    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:00.170268    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:00.200588    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.200588    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:00.200588    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:00.200588    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:00.270835    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:00.270835    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:00.309448    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:00.310446    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:00.403831    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:00.393165   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:00.394233   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:00.395506   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:00.396522   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:00.397851   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:28:00.403831    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:00.403831    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:00.431826    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:00.431826    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:28:00.745823    8476 host.go:66] Checking if "kubenet-416400" exists ...
	I1213 10:28:00.747823    8476 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:28:00.747823    8476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:28:00.751823    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:28:00.752838    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Status}}
	I1213 10:28:00.805827    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	I1213 10:28:00.806835    8476 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:28:00.806835    8476 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:28:00.809826    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:28:00.859695    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	I1213 10:28:00.877310    8476 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
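The sed pipeline above rewrites the coredns ConfigMap in flight: it inserts a hosts block resolving host.minikube.internal to the Docker Desktop host gateway (192.168.65.254) just before the forward directive, and a log directive before errors. Reconstructed from that sed expression, the patched Corefile stanza should look roughly like this (the ... stands for whatever other directives the ConfigMap already carried):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.65.254 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }

The fallthrough keeps every name other than host.minikube.internal flowing on to the forward plugin; the later "host record injected into CoreDNS's ConfigMap" message confirms the replace landed.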
	I1213 10:28:01.093206    8476 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:28:01.096660    8476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:28:01.289059    8476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:28:01.688169    8476 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I1213 10:28:01.693138    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:28:01.748392    8476 node_ready.go:35] waiting up to 15m0s for node "kubenet-416400" to be "Ready" ...
	I1213 10:28:01.777235    8476 node_ready.go:49] node "kubenet-416400" is "Ready"
	I1213 10:28:01.777235    8476 node_ready.go:38] duration metric: took 28.7755ms for node "kubenet-416400" to be "Ready" ...
	I1213 10:28:01.778242    8476 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:28:01.782492    8476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:02.197568    8476 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kubenet-416400" context rescaled to 1 replicas
	I1213 10:28:02.343589    8476 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.053978s)
	I1213 10:28:02.343589    8476 api_server.go:72] duration metric: took 1.673269s to wait for apiserver process to appear ...
	I1213 10:28:02.343589    8476 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.246374s)
	I1213 10:28:02.343677    8476 api_server.go:88] waiting for apiserver healthz status ...
	I1213 10:28:02.343720    8476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55078/healthz ...
	I1213 10:28:02.352594    8476 api_server.go:279] https://127.0.0.1:55078/healthz returned 200:
	ok
	I1213 10:28:02.355060    8476 api_server.go:141] control plane version: v1.34.2
	I1213 10:28:02.355060    8476 api_server.go:131] duration metric: took 11.3397ms to wait for apiserver health ...
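With the control plane up, the health probe goes through the Docker-published host port: 127.0.0.1:55078 on the Windows host maps to the node container's 8443, and minikube polls /healthz until it answers 200 ok. The apiserver serves /healthz to anonymous clients by default, so a minimal version of the check only needs to skip certificate verification, since the probe does not yet trust the minikube CA (the port number is this run's; it varies per profile):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // A bootstrap-style probe has no reason to trust the cluster CA
            // yet; never skip verification for real API traffic.
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://127.0.0.1:55078/healthz")
        if err != nil {
            fmt.Println("healthz unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body)
    }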
	I1213 10:28:02.355060    8476 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 10:28:02.363052    8476 system_pods.go:59] 8 kube-system pods found
	I1213 10:28:02.363052    8476 system_pods.go:61] "coredns-66bc5c9577-pzlst" [c0710760-8702-46dd-82cf-57f9c82bfa9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.363052    8476 system_pods.go:61] "coredns-66bc5c9577-qsf76" [941a59a1-7977-4e35-90e1-5e787611afef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.363052    8476 system_pods.go:61] "etcd-kubenet-416400" [a3364f15-398a-4392-8334-ba7bb2989af2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 10:28:02.363052    8476 system_pods.go:61] "kube-apiserver-kubenet-416400" [64e3f0cf-d02f-4c81-854c-634c396b4c0a] Running
	I1213 10:28:02.363052    8476 system_pods.go:61] "kube-controller-manager-kubenet-416400" [7af07742-4273-4dcd-8776-4db035d3905b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:28:02.363052    8476 system_pods.go:61] "kube-proxy-7bdqb" [a4863c98-3861-4d39-b4e6-81ebe763237d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 10:28:02.363052    8476 system_pods.go:61] "kube-scheduler-kubenet-416400" [df666217-8c7b-4d6d-b9ce-9740af155c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:28:02.363052    8476 system_pods.go:61] "storage-provisioner" [1cae84ec-bfa6-4586-83a9-dfdac48e2707] Pending
	I1213 10:28:02.363052    8476 system_pods.go:74] duration metric: took 7.9926ms to wait for pod list to return data ...
	I1213 10:28:02.363052    8476 default_sa.go:34] waiting for default service account to be created ...
	I1213 10:28:02.363944    8476 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1213 10:28:02.368689    8476 default_sa.go:45] found service account: "default"
	I1213 10:28:02.368689    8476 default_sa.go:55] duration metric: took 5.6365ms for default service account to be created ...
	I1213 10:28:02.368689    8476 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 10:28:02.368892    8476 addons.go:530] duration metric: took 1.6984619s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1213 10:28:02.374322    8476 system_pods.go:86] 8 kube-system pods found
	I1213 10:28:02.374322    8476 system_pods.go:89] "coredns-66bc5c9577-pzlst" [c0710760-8702-46dd-82cf-57f9c82bfa9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.374322    8476 system_pods.go:89] "coredns-66bc5c9577-qsf76" [941a59a1-7977-4e35-90e1-5e787611afef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.374322    8476 system_pods.go:89] "etcd-kubenet-416400" [a3364f15-398a-4392-8334-ba7bb2989af2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 10:28:02.374322    8476 system_pods.go:89] "kube-apiserver-kubenet-416400" [64e3f0cf-d02f-4c81-854c-634c396b4c0a] Running
	I1213 10:28:02.374322    8476 system_pods.go:89] "kube-controller-manager-kubenet-416400" [7af07742-4273-4dcd-8776-4db035d3905b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:28:02.374322    8476 system_pods.go:89] "kube-proxy-7bdqb" [a4863c98-3861-4d39-b4e6-81ebe763237d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 10:28:02.374322    8476 system_pods.go:89] "kube-scheduler-kubenet-416400" [df666217-8c7b-4d6d-b9ce-9740af155c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:28:02.374322    8476 system_pods.go:89] "storage-provisioner" [1cae84ec-bfa6-4586-83a9-dfdac48e2707] Pending
	I1213 10:28:02.374322    8476 retry.go:31] will retry after 257.90094ms: missing components: kube-dns, kube-proxy
	I1213 10:28:02.647317    8476 system_pods.go:86] 8 kube-system pods found
	I1213 10:28:02.647382    8476 system_pods.go:89] "coredns-66bc5c9577-pzlst" [c0710760-8702-46dd-82cf-57f9c82bfa9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.647382    8476 system_pods.go:89] "coredns-66bc5c9577-qsf76" [941a59a1-7977-4e35-90e1-5e787611afef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.647382    8476 system_pods.go:89] "etcd-kubenet-416400" [a3364f15-398a-4392-8334-ba7bb2989af2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 10:28:02.647382    8476 system_pods.go:89] "kube-apiserver-kubenet-416400" [64e3f0cf-d02f-4c81-854c-634c396b4c0a] Running
	I1213 10:28:02.647448    8476 system_pods.go:89] "kube-controller-manager-kubenet-416400" [7af07742-4273-4dcd-8776-4db035d3905b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:28:02.647448    8476 system_pods.go:89] "kube-proxy-7bdqb" [a4863c98-3861-4d39-b4e6-81ebe763237d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 10:28:02.647448    8476 system_pods.go:89] "kube-scheduler-kubenet-416400" [df666217-8c7b-4d6d-b9ce-9740af155c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:28:02.647496    8476 system_pods.go:89] "storage-provisioner" [1cae84ec-bfa6-4586-83a9-dfdac48e2707] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:28:02.647496    8476 retry.go:31] will retry after 305.033982ms: missing components: kube-dns, kube-proxy
	I1213 10:28:02.960601    8476 system_pods.go:86] 8 kube-system pods found
	I1213 10:28:02.960642    8476 system_pods.go:89] "coredns-66bc5c9577-pzlst" [c0710760-8702-46dd-82cf-57f9c82bfa9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.960678    8476 system_pods.go:89] "coredns-66bc5c9577-qsf76" [941a59a1-7977-4e35-90e1-5e787611afef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.960678    8476 system_pods.go:89] "etcd-kubenet-416400" [a3364f15-398a-4392-8334-ba7bb2989af2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 10:28:02.960728    8476 system_pods.go:89] "kube-apiserver-kubenet-416400" [64e3f0cf-d02f-4c81-854c-634c396b4c0a] Running
	I1213 10:28:02.960728    8476 system_pods.go:89] "kube-controller-manager-kubenet-416400" [7af07742-4273-4dcd-8776-4db035d3905b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:28:02.960728    8476 system_pods.go:89] "kube-proxy-7bdqb" [a4863c98-3861-4d39-b4e6-81ebe763237d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 10:28:02.960728    8476 system_pods.go:89] "kube-scheduler-kubenet-416400" [df666217-8c7b-4d6d-b9ce-9740af155c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:28:02.960780    8476 system_pods.go:89] "storage-provisioner" [1cae84ec-bfa6-4586-83a9-dfdac48e2707] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:28:02.960803    8476 retry.go:31] will retry after 352.340429ms: missing components: kube-dns, kube-proxy
	I1213 10:28:03.376766    8476 system_pods.go:86] 8 kube-system pods found
	I1213 10:28:03.376766    8476 system_pods.go:89] "coredns-66bc5c9577-pzlst" [c0710760-8702-46dd-82cf-57f9c82bfa9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:03.376766    8476 system_pods.go:89] "coredns-66bc5c9577-qsf76" [941a59a1-7977-4e35-90e1-5e787611afef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:03.376766    8476 system_pods.go:89] "etcd-kubenet-416400" [a3364f15-398a-4392-8334-ba7bb2989af2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 10:28:03.376766    8476 system_pods.go:89] "kube-apiserver-kubenet-416400" [64e3f0cf-d02f-4c81-854c-634c396b4c0a] Running
	I1213 10:28:03.376766    8476 system_pods.go:89] "kube-controller-manager-kubenet-416400" [7af07742-4273-4dcd-8776-4db035d3905b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:28:03.376766    8476 system_pods.go:89] "kube-proxy-7bdqb" [a4863c98-3861-4d39-b4e6-81ebe763237d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 10:28:03.376766    8476 system_pods.go:89] "kube-scheduler-kubenet-416400" [df666217-8c7b-4d6d-b9ce-9740af155c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:28:03.376766    8476 system_pods.go:89] "storage-provisioner" [1cae84ec-bfa6-4586-83a9-dfdac48e2707] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:28:03.377765    8476 retry.go:31] will retry after 379.080105ms: missing components: kube-dns, kube-proxy
	I1213 10:28:02.990203    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:03.012584    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:03.048099    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.049085    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:03.054131    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:03.090044    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.090114    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:03.094206    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:03.124610    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.124610    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:03.128713    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:03.158624    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.158624    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:03.162039    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:03.197023    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.197023    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:03.201011    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:03.231523    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.231523    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:03.238992    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:03.270780    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.270780    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:03.273777    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:03.307802    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.307802    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:03.307802    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:03.307802    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:28:03.365023    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:03.365023    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:03.434753    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:03.434753    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:03.474998    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:03.474998    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:03.558479    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:03.548624   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.550169   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.550790   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.552338   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.553567   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:28:03.548624   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.550169   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.550790   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.552338   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.553567   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:28:03.558479    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:03.558479    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:06.093878    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:06.119160    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:06.151920    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.151956    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:06.155686    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:03.767616    8476 system_pods.go:86] 7 kube-system pods found
	I1213 10:28:03.767736    8476 system_pods.go:89] "coredns-66bc5c9577-pzlst" [c0710760-8702-46dd-82cf-57f9c82bfa9a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:03.767736    8476 system_pods.go:89] "etcd-kubenet-416400" [a3364f15-398a-4392-8334-ba7bb2989af2] Running
	I1213 10:28:03.767836    8476 system_pods.go:89] "kube-apiserver-kubenet-416400" [64e3f0cf-d02f-4c81-854c-634c396b4c0a] Running
	I1213 10:28:03.767860    8476 system_pods.go:89] "kube-controller-manager-kubenet-416400" [7af07742-4273-4dcd-8776-4db035d3905b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:28:03.767860    8476 system_pods.go:89] "kube-proxy-7bdqb" [a4863c98-3861-4d39-b4e6-81ebe763237d] Running
	I1213 10:28:03.767860    8476 system_pods.go:89] "kube-scheduler-kubenet-416400" [df666217-8c7b-4d6d-b9ce-9740af155c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:28:03.767860    8476 system_pods.go:89] "storage-provisioner" [1cae84ec-bfa6-4586-83a9-dfdac48e2707] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:28:03.767920    8476 system_pods.go:126] duration metric: took 1.399211s to wait for k8s-apps to be running ...
	I1213 10:28:03.767952    8476 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 10:28:03.772800    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:28:03.793452    8476 system_svc.go:56] duration metric: took 25.5002ms WaitForService to wait for kubelet
	I1213 10:28:03.793452    8476 kubeadm.go:587] duration metric: took 3.1231108s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:28:03.793452    8476 node_conditions.go:102] verifying NodePressure condition ...
	I1213 10:28:03.799850    8476 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1213 10:28:03.799942    8476 node_conditions.go:123] node cpu capacity is 16
	I1213 10:28:03.799942    8476 node_conditions.go:105] duration metric: took 6.4898ms to run NodePressure ...
	I1213 10:28:03.800002    8476 start.go:242] waiting for startup goroutines ...
	I1213 10:28:03.800002    8476 start.go:247] waiting for cluster config update ...
	I1213 10:28:03.800034    8476 start.go:256] writing updated cluster config ...
	I1213 10:28:03.805062    8476 ssh_runner.go:195] Run: rm -f paused
	I1213 10:28:03.812457    8476 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 10:28:03.818438    8476 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pzlst" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 10:28:05.831273    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:08.330368    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	I1213 10:28:06.185340    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.185340    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:06.189047    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:06.218663    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.218713    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:06.223022    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:06.251817    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.251817    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:06.256048    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:06.288967    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.289042    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:06.293045    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:06.324404    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.324404    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:06.328470    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:06.359488    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.359488    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:06.363305    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:06.395085    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.395085    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:06.395085    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:06.395085    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:06.460705    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:06.460705    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:06.500531    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:06.500531    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:06.584202    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:06.573119   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.576304   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.577709   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.579122   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.580090   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:28:06.573119   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.576304   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.577709   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.579122   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.580090   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:28:06.584202    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:06.584202    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:06.612936    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:06.612936    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:28:09.171143    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:09.196436    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:09.230003    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.230072    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:09.234113    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:09.263594    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.263629    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:09.267574    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:09.295583    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.295671    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:09.300744    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:09.330627    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.330627    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:09.334426    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:09.370279    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.370279    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:09.374820    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:09.404955    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.405033    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:09.410253    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:09.441568    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.441568    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:09.445297    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:09.485821    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.485874    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:09.485874    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:09.485936    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:09.548603    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:09.548603    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:09.588521    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:09.588521    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:09.678327    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:09.666892   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.667836   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.670310   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.671394   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.672438   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:28:09.666892   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.667836   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.670310   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.671394   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.672438   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:28:09.678369    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:09.678369    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:09.705500    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:09.705500    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:28:10.333290    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:12.830400    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	I1213 10:28:12.262086    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:12.290635    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:12.327110    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.327110    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:12.331105    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:12.360305    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.360305    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:12.367813    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:12.398968    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.399045    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:12.403042    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:12.436089    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.436089    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:12.439942    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:12.471734    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.471734    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:12.475722    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:12.505991    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.506024    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:12.509742    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:12.539425    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.539425    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:12.543823    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:12.573279    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.573344    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:12.573344    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:12.573344    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:12.636807    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:12.636807    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:12.677094    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:12.677094    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:12.762424    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:12.751891   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.752690   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.755186   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.756173   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.756852   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:28:12.751891   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.752690   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.755186   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.756173   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.756852   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:28:12.762424    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:12.762424    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:12.790164    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:12.790164    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:28:15.344891    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:15.368646    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:15.404255    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.404255    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:15.409408    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:15.441938    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.441938    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:15.445068    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:15.475697    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.475697    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:15.479253    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:15.511327    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.511327    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:15.515265    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:15.545395    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.545395    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:15.548941    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:15.579842    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.579918    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:15.584969    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:15.614571    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.614571    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:15.618436    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:15.650365    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.650427    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:15.650427    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:15.650427    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:15.714351    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:15.714351    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:15.752018    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:15.752018    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:15.834772    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:15.824883   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.826055   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.826571   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.829124   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.829823   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:28:15.824883   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.826055   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.826571   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.829124   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.829823   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:28:15.834772    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:15.834772    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:15.866850    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:15.866850    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:28:14.830848    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:17.329771    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	I1213 10:28:18.423576    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:18.449885    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:18.482529    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.482601    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:18.485766    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:18.514138    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.514797    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:18.518214    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:18.550542    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.550542    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:18.553540    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:18.584106    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.584106    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:18.588197    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:18.619945    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.619977    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:18.623644    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:18.654453    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.654453    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:18.657446    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:18.687250    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.687250    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:18.690703    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:18.717150    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.717150    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:18.717150    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:18.717150    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:28:18.770937    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:18.770937    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:18.835919    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:18.835919    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:18.872319    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:18.873326    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:18.962288    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:18.952563   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.953751   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.955148   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.956811   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.959348   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:28:18.952563   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.953751   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.955148   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.956811   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.959348   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:28:18.962288    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:18.963246    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:21.496578    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:21.522995    5404 out.go:203] 
	W1213 10:28:21.525440    5404 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1213 10:28:21.525581    5404 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1213 10:28:21.525667    5404 out.go:285] * Related issues:
	W1213 10:28:21.525667    5404 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1213 10:28:21.525824    5404 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1213 10:28:21.528379    5404 out.go:203] 
	W1213 10:28:19.831718    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:21.833516    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	
	
	==> Docker <==
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.725825301Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.725986416Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.725998417Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.726003718Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.726009218Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.726219138Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.726398555Z" level=info msg="Initializing buildkit"
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.844000659Z" level=info msg="Completed buildkit initialization"
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.850793321Z" level=info msg="Daemon has completed initialization"
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.851043146Z" level=info msg="API listen on /run/docker.sock"
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.851051346Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.851065248Z" level=info msg="API listen on [::]:2376"
	Dec 13 10:22:16 newest-cni-307000 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 13 10:22:17 newest-cni-307000 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 10:22:17 newest-cni-307000 cri-dockerd[1219]: time="2025-12-13T10:22:17Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 13 10:22:17 newest-cni-307000 cri-dockerd[1219]: time="2025-12-13T10:22:17Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 13 10:22:17 newest-cni-307000 cri-dockerd[1219]: time="2025-12-13T10:22:17Z" level=info msg="Start docker client with request timeout 0s"
	Dec 13 10:22:17 newest-cni-307000 cri-dockerd[1219]: time="2025-12-13T10:22:17Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 13 10:22:17 newest-cni-307000 cri-dockerd[1219]: time="2025-12-13T10:22:17Z" level=info msg="Loaded network plugin cni"
	Dec 13 10:22:17 newest-cni-307000 cri-dockerd[1219]: time="2025-12-13T10:22:17Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 13 10:22:17 newest-cni-307000 cri-dockerd[1219]: time="2025-12-13T10:22:17Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 13 10:22:17 newest-cni-307000 cri-dockerd[1219]: time="2025-12-13T10:22:17Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 13 10:22:17 newest-cni-307000 cri-dockerd[1219]: time="2025-12-13T10:22:17Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 13 10:22:17 newest-cni-307000 cri-dockerd[1219]: time="2025-12-13T10:22:17Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 13 10:22:17 newest-cni-307000 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:25.285674   19635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:25.286974   19635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:25.288739   19635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:25.290734   19635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:25.293562   19635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000002] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +7.347224] CPU: 1 PID: 487650 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f03540a7b20
	[  +0.000039] Code: Unable to access opcode bytes at RIP 0x7f03540a7af6.
	[  +0.000001] RSP: 002b:00007fff4615c900 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.848535] CPU: 14 PID: 487834 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f24bdd40b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f24bdd40af6.
	[  +0.000001] RSP: 002b:00007ffcef45f750 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +9.262444] tmpfs: Unknown parameter 'noswap'
	[ +10.454536] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 10:28:25 up  2:04,  0 user,  load average: 2.79, 3.75, 3.63
	Linux newest-cni-307000 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:28:22 newest-cni-307000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:28:22 newest-cni-307000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 13 10:28:22 newest-cni-307000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:28:22 newest-cni-307000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:28:22 newest-cni-307000 kubelet[19468]: E1213 10:28:22.946268   19468 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:28:22 newest-cni-307000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:28:22 newest-cni-307000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:28:23 newest-cni-307000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 13 10:28:23 newest-cni-307000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:28:23 newest-cni-307000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:28:23 newest-cni-307000 kubelet[19481]: E1213 10:28:23.678553   19481 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:28:23 newest-cni-307000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:28:23 newest-cni-307000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:28:24 newest-cni-307000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 13 10:28:24 newest-cni-307000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:28:24 newest-cni-307000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:28:24 newest-cni-307000 kubelet[19510]: E1213 10:28:24.409918   19510 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:28:24 newest-cni-307000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:28:24 newest-cni-307000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:28:25 newest-cni-307000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 486.
	Dec 13 10:28:25 newest-cni-307000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:28:25 newest-cni-307000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:28:25 newest-cni-307000 kubelet[19598]: E1213 10:28:25.170058   19598 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:28:25 newest-cni-307000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:28:25 newest-cni-307000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
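The kubelet section above is the proximate cause of this failure: on this WSL2 host (kernel 5.15.153.1-microsoft-standard-WSL2, which mounts cgroup v1) the v1.35.0-beta.0 kubelet exits during configuration validation with "kubelet is configured to not run on a host using cgroup v1", so the static pods (kube-apiserver, etcd, kube-scheduler, kube-controller-manager) are never created and every probe of localhost:8443 is refused. A minimal diagnostic sketch, assuming the profile name taken from the logs; stat -fc %T /sys/fs/cgroup/ is the usual way to tell the two hierarchies apart:

	# Check which cgroup hierarchy the minikube node is running.
	# "cgroup2fs" means cgroup v2; "tmpfs" means cgroup v1, which this
	# kubelet configuration rejects.
	minikube ssh -p newest-cni-307000 -- stat -fc %T /sys/fs/cgroup/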
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-307000 -n newest-cni-307000
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-307000 -n newest-cni-307000: exit status 2 (570.8517ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-307000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (380.49s)
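
The restart counter in the kubelet log (483 through 486 in roughly three seconds) shows the same crash loop behind this SecondStart failure: minikube waits 6m0s for an apiserver process that the failing kubelet never launches and then exits with K8S_APISERVER_MISSING, so the logged suggestion about apiserver flags and SELinux is a red herring here. One commonly cited workaround on WSL2 hosts (a sketch of an assumption, not something this run attempted) is to boot the WSL kernel with the unified cgroup v2 hierarchy via %UserProfile%\.wslconfig and restart WSL:

	# %UserProfile%\.wslconfig (hypothetical workaround sketch):
	# disable cgroup v1 controllers so systemd mounts the unified v2
	# hierarchy that this kubelet configuration requires.
	[wsl2]
	kernelCommandLine = cgroup_no_v1=all systemd.unified_cgroup_hierarchy=1

	# From a Windows shell, restart WSL (and Docker Desktop) to apply:
	wsl --shutdown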

x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.56s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:26:43.298455    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:43.305037    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:43.316738    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:43.338243    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:43.380001    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:43.461425    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:43.623316    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:43.944986    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:26:44.587711    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:45.869939    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:48.431792    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
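The interleaved cert_rotation errors are a separate symptom: the test binary's cached kubeconfig entries still reference client certificates under profile directories (custom-flannel-416400 here; calico, flannel, bridge, kubenet and others below) that earlier parts of the run deleted. A hedged way to audit which certificate path each kubeconfig user still points at, assuming kubectl is on PATH and the jsonpath keys follow the standard kubeconfig schema:

	# hypothetical audit command, not executed by the test
	kubectl config view -o jsonpath='{range .users[*]}{.name}{"\t"}{.user.client-certificate}{"\n"}{end}'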
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:27:06.898892    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:27:06.905767    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:27:06.917096    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:27:06.940085    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:27:06.982636    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:27:07.064830    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:27:07.226432    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:27:07.548084    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:27:17.158172    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:27:24.287146    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:27:27.400203    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:27:36.724021    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:27:47.882682    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:28:05.249452    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:29:05.572779    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:29:08.135132    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:29:13.257467    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:29:18.143367    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:29:23.500167    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:29:34.587106    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:29:35.228518    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:29:36.511156    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:29:36.891485    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-987400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:29:39.072862    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:29:43.982248    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:29:44.195554    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:29:45.851252    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:29:46.066619    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:29:50.768713    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:29:54.437533    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:30:14.920191    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:30:22.032783    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:30:24.944659    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:30:55.883057    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:30:59.963560    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-987400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:31:11.489066    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:31:11.496357    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:31:11.507963    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:31:11.529575    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:31:11.571939    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:31:11.654044    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:31:11.816077    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:31:12.137916    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:31:12.780108    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:31:14.061866    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:31:16.624223    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:31:17.850869    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-818600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:31:21.745962    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:31:31.988645    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:31:43.302761    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:31:45.111944    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:31:46.868790    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:31:52.470648    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:31:53.933409    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:31:53.940473    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:31:53.952266    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:31:53.974497    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:31:54.016686    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:31:54.098317    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:31:54.260186    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:31:54.582298    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:31:55.223979    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:31:56.505644    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:31:59.067861    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:32:04.189961    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:32:06.903349    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:32:11.025261    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:32:12.557321    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:32:14.432268    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:32:17.806273    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:32:19.813604    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:32:33.432657    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:32:34.612984    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:32:34.914468    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:32:36.728160    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:32:40.925233    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-818600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:33:15.876926    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:33:42.410117    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:33:42.417097    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:33:42.429450    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:33:42.451190    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:33:42.493550    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:33:42.575681    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:33:42.737714    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:33:43.059589    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:33:43.701692    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:33:44.983335    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:33:47.545415    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:33:52.667730    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:33:55.356176    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:34:02.910311    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:34:03.001834    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:34:18.147447    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:34:23.393280    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:34:30.713880    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:34:33.941934    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:34:36.895576    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-987400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:34:37.800067    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:34:46.070325    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:35:01.651718    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-803600 -n no-preload-803600
E1213 10:35:04.356097    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-803600 -n no-preload-803600: exit status 2 (603.6625ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "no-preload-803600" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
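Every poll above fails the same way: the apiserver published on 127.0.0.1:53494 is down, so each list of pods matching the k8s-app=kubernetes-dashboard label selector ends in EOF until the 9m0s deadline expires. A hedged manual equivalent of the poll the test performs, assuming minikube created a kubeconfig context named after the profile (its default behavior):

	kubectl --context no-preload-803600 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard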
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-803600
helpers_test.go:244: (dbg) docker inspect no-preload-803600:

-- stdout --
	[
	    {
	        "Id": "3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd",
	        "Created": "2025-12-13T10:09:24.921242732Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 410406,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:19:47.312495248Z",
	            "FinishedAt": "2025-12-13T10:19:43.959791267Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd/hostname",
	        "HostsPath": "/var/lib/docker/containers/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd/hosts",
	        "LogPath": "/var/lib/docker/containers/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd-json.log",
	        "Name": "/no-preload-803600",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-803600:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-803600",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/571041f9092b0534048a0b1dac35e9d4a08a2ff2442796fa15a0636437fe7f5e-init/diff:/var/lib/docker/overlay2/429aa299c6fcdb1695d08ec7c893c57c033afffcd3ec41fc904bf3236db5abde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/571041f9092b0534048a0b1dac35e9d4a08a2ff2442796fa15a0636437fe7f5e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/571041f9092b0534048a0b1dac35e9d4a08a2ff2442796fa15a0636437fe7f5e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/571041f9092b0534048a0b1dac35e9d4a08a2ff2442796fa15a0636437fe7f5e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-803600",
	                "Source": "/var/lib/docker/volumes/no-preload-803600/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-803600",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-803600",
	                "name.minikube.sigs.k8s.io": "no-preload-803600",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "202edcc07e78147ef811fd01911ae5ff35d0d9d006f45e69c81f5303ddbf73f3",
	            "SandboxKey": "/var/run/docker/netns/202edcc07e78",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53489"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53490"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53491"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53493"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53494"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-803600": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ad4e73e428abf58593ff96b4628f21032a7a4afd7c1c0bb8be8d55b4e2d320fc",
	                    "EndpointID": "5315c65ac1c1a0593e57f42a5908d620f4852bb681cd15a9c6018ed864a9d80f",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-803600",
	                        "3960d9897f63"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
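
The inspect output above confirms the container itself is up, with all five ports (22, 2376, 32443, 5000, 8443) published on 127.0.0.1. To read a single mapping without scanning the full JSON, a minimal sketch using the standard docker CLI from a POSIX shell (container name taken from the profile above):

    docker port no-preload-803600 8443/tcp
    # equivalent, via the same NetworkSettings data and a Go template:
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-803600

Given the mappings above, the first form prints 127.0.0.1:53494 and the second just 53494.
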
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-803600 -n no-preload-803600
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-803600 -n no-preload-803600: exit status 2 (598.6286ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-803600 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-803600 logs -n 25: (1.2016384s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────┬────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                       │    PROFILE     │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────┼────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kubenet-416400 sudo iptables -t nat -L -n -v                                 │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo systemctl status kubelet --all --full --no-pager         │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo systemctl cat kubelet --no-pager                         │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo journalctl -xeu kubelet --all --full --no-pager          │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo cat /etc/kubernetes/kubelet.conf                         │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo cat /var/lib/kubelet/config.yaml                         │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo systemctl status docker --all --full --no-pager          │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo systemctl cat docker --no-pager                          │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo cat /etc/docker/daemon.json                              │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo docker system info                                       │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo systemctl status cri-docker --all --full --no-pager      │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo systemctl cat cri-docker --no-pager                      │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo cat /usr/lib/systemd/system/cri-docker.service           │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo cri-dockerd --version                                    │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo systemctl status containerd --all --full --no-pager      │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo systemctl cat containerd --no-pager                      │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo cat /lib/systemd/system/containerd.service               │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo cat /etc/containerd/config.toml                          │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo containerd config dump                                   │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo systemctl status crio --all --full --no-pager            │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │                     │
	│ ssh     │ -p kubenet-416400 sudo systemctl cat crio --no-pager                            │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo crio config                                              │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ delete  │ -p kubenet-416400                                                               │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────┴────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:27:08
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:27:08.467331    8476 out.go:360] Setting OutFile to fd 1212 ...
	I1213 10:27:08.510327    8476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:27:08.510327    8476 out.go:374] Setting ErrFile to fd 1652...
	I1213 10:27:08.510327    8476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:27:08.525338    8476 out.go:368] Setting JSON to false
	I1213 10:27:08.528326    8476 start.go:133] hostinfo: {"hostname":"minikube4","uptime":7435,"bootTime":1765614192,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 10:27:08.529330    8476 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 10:27:08.533334    8476 out.go:179] * [kubenet-416400] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 10:27:08.536332    8476 notify.go:221] Checking for updates...
	I1213 10:27:08.538327    8476 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:27:08.541325    8476 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:27:08.543338    8476 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 10:27:08.545327    8476 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:27:08.547331    8476 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:27:08.550333    8476 config.go:182] Loaded profile config "bridge-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 10:27:08.551337    8476 config.go:182] Loaded profile config "newest-cni-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:27:08.551337    8476 config.go:182] Loaded profile config "no-preload-803600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:27:08.551337    8476 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:27:08.665330    8476 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 10:27:08.669336    8476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:27:08.911222    8476 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:27:08.888781942 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:27:08.914226    8476 out.go:179] * Using the docker driver based on user configuration
	I1213 10:27:08.917218    8476 start.go:309] selected driver: docker
	I1213 10:27:08.917218    8476 start.go:927] validating driver "docker" against <nil>
	I1213 10:27:08.917218    8476 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:27:09.005866    8476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:27:09.274907    8476 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:27:09.25177994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:27:09.275859    8476 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:27:09.275859    8476 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:27:09.278852    8476 out.go:179] * Using Docker Desktop driver with root privileges
	I1213 10:27:09.281854    8476 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1213 10:27:09.281854    8476 start.go:353] cluster config:
	{Name:kubenet-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:27:09.284873    8476 out.go:179] * Starting "kubenet-416400" primary control-plane node in "kubenet-416400" cluster
	I1213 10:27:09.288885    8476 cache.go:134] Beginning downloading kic base image for docker with docker
	I1213 10:27:09.290853    8476 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:27:09.296882    8476 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:27:09.296882    8476 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:27:09.296882    8476 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1213 10:27:09.296882    8476 cache.go:65] Caching tarball of preloaded images
	I1213 10:27:09.297854    8476 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 10:27:09.297854    8476 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1213 10:27:09.297854    8476 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\config.json ...
	I1213 10:27:09.297854    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\config.json: {Name:mk0f8afb036d1878ac71666ce4d58fd434d1389e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:09.364866    8476 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:27:09.364866    8476 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:27:09.364866    8476 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:27:09.364866    8476 start.go:360] acquireMachinesLock for kubenet-416400: {Name:mk28dcadbda914f3b76421bc1eef202d654b5e0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:27:09.365883    8476 start.go:364] duration metric: took 0s to acquireMachinesLock for "kubenet-416400"
	I1213 10:27:09.365883    8476 start.go:93] Provisioning new machine with config: &{Name:kubenet-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 10:27:09.365883    8476 start.go:125] createHost starting for "" (driver="docker")
	I1213 10:27:06.633379    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:06.659612    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:06.687667    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.687737    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:06.691602    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:06.721405    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.721405    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:06.725270    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:06.757478    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.757478    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:06.761297    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:06.801212    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.801212    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:06.805113    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:06.849918    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.849918    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:06.853787    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:06.888435    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.888435    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:06.895174    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:06.930085    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.930085    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:06.933086    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:06.964089    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.964089    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:06.964089    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:06.964089    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:07.052109    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:07.052109    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:07.092822    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:07.092822    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:07.184921    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:07.172596   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.173907   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.175435   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.176746   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.177730   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:07.172596   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.173907   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.175435   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.176746   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.177730   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
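
The loop above keeps failing for the same underlying reason: nothing is listening on localhost:8443 inside the node, consistent with the empty k8s_kube-apiserver container list. A minimal sketch for probing the API server directly from inside the node (via minikube ssh against the affected profile; /livez is the standard kube-apiserver health endpoint, -k skips certificate verification, and the sketch assumes anonymous access to the health endpoints is permitted):

    curl -sk https://localhost:8443/livez || echo "apiserver not listening"

On a healthy node this prints ok; here it would hit the same connection refused that kubectl reports.
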
	I1213 10:27:07.184921    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:07.184921    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:07.212614    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:07.212614    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
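
The container-status line above is a fallback chain rather than a single command: it resolves crictl's path if installed, and otherwise degrades to the plain docker CLI. The same logic, spelled out:

    # `which crictl` prints the full path when installed; on failure the echo
    # substitutes the bare name, that crictl invocation then fails too, and
    # the outer || falls through to docker ps -a
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
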
	I1213 10:27:09.772840    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:09.803912    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:09.843377    5404 logs.go:282] 0 containers: []
	W1213 10:27:09.843377    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:09.846881    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:09.876528    5404 logs.go:282] 0 containers: []
	W1213 10:27:09.876528    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:09.879529    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:09.910044    5404 logs.go:282] 0 containers: []
	W1213 10:27:09.910044    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:09.916549    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:09.959417    5404 logs.go:282] 0 containers: []
	W1213 10:27:09.959417    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:09.964602    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:09.999344    5404 logs.go:282] 0 containers: []
	W1213 10:27:09.999344    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:10.002336    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:10.032356    5404 logs.go:282] 0 containers: []
	W1213 10:27:10.032356    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:10.036336    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:10.070437    5404 logs.go:282] 0 containers: []
	W1213 10:27:10.070489    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:10.074554    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:10.112271    5404 logs.go:282] 0 containers: []
	W1213 10:27:10.112330    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:10.112330    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:10.112330    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:10.147886    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:10.147886    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:10.243310    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:10.232461   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.233610   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.235121   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.236121   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.237697   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:10.232461   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.233610   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.235121   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.236121   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.237697   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:10.243405    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:10.243405    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:10.272729    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:10.272729    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:10.326215    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:10.326215    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:09.368853    8476 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 10:27:09.369855    8476 start.go:159] libmachine.API.Create for "kubenet-416400" (driver="docker")
	I1213 10:27:09.369855    8476 client.go:173] LocalClient.Create starting
	I1213 10:27:09.369855    8476 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1213 10:27:09.369855    8476 main.go:143] libmachine: Decoding PEM data...
	I1213 10:27:09.369855    8476 main.go:143] libmachine: Parsing certificate...
	I1213 10:27:09.369855    8476 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1213 10:27:09.369855    8476 main.go:143] libmachine: Decoding PEM data...
	I1213 10:27:09.369855    8476 main.go:143] libmachine: Parsing certificate...
	I1213 10:27:09.375556    8476 cli_runner.go:164] Run: docker network inspect kubenet-416400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 10:27:09.428532    8476 cli_runner.go:211] docker network inspect kubenet-416400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 10:27:09.431540    8476 network_create.go:284] running [docker network inspect kubenet-416400] to gather additional debugging logs...
	I1213 10:27:09.431540    8476 cli_runner.go:164] Run: docker network inspect kubenet-416400
	W1213 10:27:09.477538    8476 cli_runner.go:211] docker network inspect kubenet-416400 returned with exit code 1
	I1213 10:27:09.477538    8476 network_create.go:287] error running [docker network inspect kubenet-416400]: docker network inspect kubenet-416400: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubenet-416400 not found
	I1213 10:27:09.477538    8476 network_create.go:289] output of [docker network inspect kubenet-416400]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubenet-416400 not found
	
	** /stderr **
	I1213 10:27:09.481534    8476 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:27:09.553692    8476 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:27:09.568537    8476 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:27:09.580557    8476 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e4c0f0}
	I1213 10:27:09.581551    8476 network_create.go:124] attempt to create docker network kubenet-416400 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1213 10:27:09.584547    8476 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400
	W1213 10:27:09.637542    8476 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400 returned with exit code 1
	W1213 10:27:09.637542    8476 network_create.go:149] failed to create docker network kubenet-416400 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1213 10:27:09.637542    8476 network_create.go:116] failed to create docker network kubenet-416400 192.168.67.0/24, will retry: subnet is taken
	I1213 10:27:09.664108    8476 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:27:09.678099    8476 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001885710}
	I1213 10:27:09.678099    8476 network_create.go:124] attempt to create docker network kubenet-416400 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 10:27:09.682098    8476 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400
	W1213 10:27:09.738074    8476 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400 returned with exit code 1
	W1213 10:27:09.738074    8476 network_create.go:149] failed to create docker network kubenet-416400 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1213 10:27:09.738074    8476 network_create.go:116] failed to create docker network kubenet-416400 192.168.76.0/24, will retry: subnet is taken
	I1213 10:27:09.757990    8476 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:27:09.771930    8476 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001910480}
	I1213 10:27:09.772001    8476 network_create.go:124] attempt to create docker network kubenet-416400 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1213 10:27:09.775120    8476 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400
	I1213 10:27:09.917706    8476 network_create.go:108] docker network kubenet-416400 192.168.85.0/24 created
	I1213 10:27:09.917706    8476 kic.go:121] calculated static IP "192.168.85.2" for the "kubenet-416400" container
	I1213 10:27:09.926674    8476 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 10:27:09.990344    8476 cli_runner.go:164] Run: docker volume create kubenet-416400 --label name.minikube.sigs.k8s.io=kubenet-416400 --label created_by.minikube.sigs.k8s.io=true
	I1213 10:27:10.043336    8476 oci.go:103] Successfully created a docker volume kubenet-416400
	I1213 10:27:10.046336    8476 cli_runner.go:164] Run: docker run --rm --name kubenet-416400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-416400 --entrypoint /usr/bin/test -v kubenet-416400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 10:27:11.508914    8476 cli_runner.go:217] Completed: docker run --rm --name kubenet-416400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-416400 --entrypoint /usr/bin/test -v kubenet-416400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.4625571s)
	I1213 10:27:11.508914    8476 oci.go:107] Successfully prepared a docker volume kubenet-416400
	I1213 10:27:11.508914    8476 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:27:11.508914    8476 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 10:27:11.513316    8476 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-416400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
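
The network setup above shows minikube probing private /24 subnets in order (192.168.67.0, 192.168.76.0, 192.168.85.0) and retrying whenever the daemon answers "Pool overlaps with other one on this address space". A minimal bash sketch of that retry behaviour, reusing the docker flags and candidate subnets from the log:

    for subnet in 192.168.67.0/24 192.168.76.0/24 192.168.85.0/24; do
      gw="${subnet%.0/24}.1"    # e.g. 192.168.67.0/24 -> gateway 192.168.67.1
      docker network create --driver=bridge --subnet="$subnet" --gateway="$gw" \
        kubenet-416400 && break # success ends the probe; an overlap tries the next /24
    done

With the .67 and .76 pools already held by other minikube networks, only the third attempt succeeds, which is why the node is then assigned the static IP 192.168.85.2.
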
	I1213 10:27:12.902491    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:12.927076    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:12.960518    5404 logs.go:282] 0 containers: []
	W1213 10:27:12.960518    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:12.964255    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:12.994335    5404 logs.go:282] 0 containers: []
	W1213 10:27:12.994335    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:12.998437    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:13.029262    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.029262    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:13.032271    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:13.063264    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.063264    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:13.066261    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:13.100216    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.100278    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:13.103950    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:13.137029    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.137029    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:13.140883    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:13.174413    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.174413    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:13.178202    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:13.207016    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.207016    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:13.207016    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:13.207016    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:13.259542    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:13.259542    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:13.332062    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:13.332062    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:13.371879    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:13.371879    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:13.456462    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:13.445517   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.446626   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.447825   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.448792   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.450006   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:13.445517   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.446626   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.447825   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.448792   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.450006   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:13.456462    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:13.456462    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:15.989415    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:16.012448    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:16.052242    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.052312    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:16.055633    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:16.090683    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.090683    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:16.093931    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:16.133949    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.133949    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:16.138532    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:16.171831    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.171831    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:16.175955    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:16.216817    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.216864    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:16.221712    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:16.258393    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.258393    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:16.261397    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:16.294407    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.294407    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:16.297391    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:16.333410    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.333410    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:16.333410    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:16.333410    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:16.410413    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:16.410413    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:16.450393    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:16.450393    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:16.546373    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:16.533035   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.534931   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.537458   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.540395   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.542178   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:16.533035   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.534931   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.537458   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.540395   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.542178   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:16.546373    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:16.546373    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:16.575806    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:16.575806    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:19.148785    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:19.175720    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:19.209231    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.209231    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:19.217486    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:19.260811    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.260866    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:19.267265    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:19.314924    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.314924    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:19.320918    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:19.357550    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.357550    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:19.361556    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:19.392800    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.392800    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:19.397769    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:19.441959    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.441959    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:19.444967    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:19.479965    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.479965    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:19.484482    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:19.525249    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.525314    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:19.525357    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:19.525357    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:19.570778    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:19.570778    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:19.680558    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:19.668248   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.670354   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.672621   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.673972   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.675837   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:19.668248   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.670354   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.672621   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.673972   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.675837   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:19.680656    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:19.680693    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:19.714060    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:19.714103    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:19.764555    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:19.764555    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:22.334977    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:22.359551    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:22.400355    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.400355    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:22.404363    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:22.438349    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.438349    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:22.442349    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:22.473511    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.473511    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:22.478566    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:22.512393    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.512393    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:22.516409    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:22.550405    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.550405    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:22.553404    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:22.584398    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.584398    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:22.588395    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:22.615398    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.615398    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:22.618396    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:22.649404    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.649404    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:22.649404    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:22.649404    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:22.710398    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:22.710398    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:22.751988    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:22.751988    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:22.843768    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:22.835619   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.836770   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.837683   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.838841   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.839832   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:22.835619   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.836770   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.837683   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.838841   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.839832   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:22.843768    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:22.843768    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:22.871626    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:22.871626    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:25.434319    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:25.459020    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:25.500957    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.500957    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:25.505654    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:25.533996    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.534053    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:25.538297    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:25.569653    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.569653    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:25.573591    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:25.606004    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.606004    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:25.612212    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:25.641756    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.641835    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:25.645703    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:25.677304    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.677342    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:25.680988    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:25.712812    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.712812    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:25.716992    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:25.748063    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.748063    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:25.748063    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:25.748063    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:25.800759    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:25.800759    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:25.873214    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:25.873214    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:25.914015    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:25.914015    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:26.003163    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:25.989841   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.991273   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.992553   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.995529   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.997804   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:25.989841   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.991273   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.992553   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.995529   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.997804   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:26.003163    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:26.003163    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:26.833120    8476 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-416400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (15.3195505s)
	I1213 10:27:26.833120    8476 kic.go:203] duration metric: took 15.3239811s to extract preloaded images to volume ...
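
	The extraction above works by running tar as the entrypoint of a throwaway kicbase container: the preload tarball is bind-mounted read-only at /preloaded.tar and the profile's named volume at /extractDir, so the archive unpacks straight into the volume. A hedged Go sketch assembling an equivalent command (paths and names are illustrative placeholders, not the values from this run):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Illustrative values; Docker requires an absolute host path for the bind mount.
		tarball := "/path/to/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4"
		volume := "kubenet-416400"
		image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083"

		// Run tar as the container entrypoint so the archive is unpacked
		// directly into the named volume mounted at /extractDir.
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s (err=%v)\n", out, err)
	}
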
	I1213 10:27:26.839444    8476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:27:27.097722    8476 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:27:27.079878659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:27:27.101719    8476 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
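
	The `docker system info --format "{{json .}}"` call above returns the daemon description as a single JSON object, which is then decoded into the struct printed by info.go. A small sketch decoding just a few of the fields visible in that log line (the field names match Docker's JSON output; the struct here is a hypothetical subset, not minikube's type):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Subset of the fields shown in the docker info line above.
	type dockerInfo struct {
		ServerVersion   string
		OperatingSystem string
		NCPU            int
		MemTotal        int64
	}

	func main() {
		out, err := exec.Command("docker", "system", "info",
			"--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("%s on %s: %d CPUs, %d bytes RAM\n",
			info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal)
	}
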
	I1213 10:27:27.338932    8476 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-416400 --name kubenet-416400 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-416400 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-416400 --network kubenet-416400 --ip 192.168.85.2 --volume kubenet-416400:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 10:27:28.058796    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Running}}
	I1213 10:27:28.125687    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Status}}
	I1213 10:27:28.182686    8476 cli_runner.go:164] Run: docker exec kubenet-416400 stat /var/lib/dpkg/alternatives/iptables
	I1213 10:27:28.308932    8476 oci.go:144] the created container "kubenet-416400" has a running status.
	I1213 10:27:28.308932    8476 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa...
	I1213 10:27:28.438434    8476 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
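
	Note how the `docker run` above publishes 127.0.0.1::8443, ::22, ::2376, ::5000 and ::32443 with no fixed host ports: Docker picks ephemeral loopback ports, and the provisioner recovers them afterwards with a container-inspect template (visible further down, where port 55079 is resolved for SSH). A hedged Go sketch of that lookup (container name here matches this run; otherwise illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same inspect template the log uses to find the host port mapped to 22/tcp.
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"kubenet-416400").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh -p", strings.TrimSpace(string(out)), "docker@127.0.0.1")
	}
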
	I1213 10:27:28.537436    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:28.561363    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:28.619392    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.619392    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:28.623396    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:28.669400    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.669400    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:28.676410    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:28.717401    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.717401    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:28.721393    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:28.757400    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.757400    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:28.760393    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:28.800402    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.800402    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:28.803398    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:28.841400    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.841400    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:28.844399    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:28.878399    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.878399    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:28.882403    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:28.916403    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.916403    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:28.916403    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:28.916403    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:28.992400    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:28.992400    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:29.040404    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:29.040404    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:29.149363    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:29.137915   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.139172   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.141264   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.142415   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.144176   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:29.137915   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.139172   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.141264   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.142415   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.144176   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:29.149363    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:29.149363    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:29.183066    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:29.183066    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:28.513430    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Status}}
	I1213 10:27:28.575704    8476 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 10:27:28.575704    8476 kic_runner.go:114] Args: [docker exec --privileged kubenet-416400 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 10:27:28.715410    8476 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa...
	I1213 10:27:31.090843    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Status}}
	I1213 10:27:31.148980    8476 machine.go:94] provisionDockerMachine start ...
	I1213 10:27:31.152618    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:31.213696    8476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:27:31.227691    8476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 55079 <nil> <nil>}
	I1213 10:27:31.227691    8476 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:27:31.426494    8476 main.go:143] libmachine: SSH cmd err, output: <nil>: kubenet-416400
	
	I1213 10:27:31.426494    8476 ubuntu.go:182] provisioning hostname "kubenet-416400"
	I1213 10:27:31.430633    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:31.483323    8476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:27:31.484332    8476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 55079 <nil> <nil>}
	I1213 10:27:31.484332    8476 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubenet-416400 && echo "kubenet-416400" | sudo tee /etc/hostname
	I1213 10:27:31.695552    8476 main.go:143] libmachine: SSH cmd err, output: <nil>: kubenet-416400
	
	I1213 10:27:31.701394    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:31.759724    8476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:27:31.759724    8476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 55079 <nil> <nil>}
	I1213 10:27:31.759724    8476 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubenet-416400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-416400/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubenet-416400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:27:31.957771    8476 main.go:143] libmachine: SSH cmd err, output: <nil>: 
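
	The hosts-file script above is an idempotent fix-up: it touches /etc/hosts only when no line already ends in the new hostname, preferring to rewrite an existing 127.0.1.1 entry over appending a new one. The same logic as a hedged Go sketch (operating on a string locally; purely illustrative, not the provisioner's code):

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	// ensureHostname mirrors the shell above: skip if the name is already
	// mapped, rewrite a 127.0.1.1 line if present, otherwise append one.
	func ensureHostname(hosts, name string) string {
		if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
			return hosts // already present, nothing to do
		}
		re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if re.MatchString(hosts) {
			return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	}

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		fmt.Print(ensureHostname(string(data), "kubenet-416400"))
	}
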
	I1213 10:27:31.957771    8476 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1213 10:27:31.957771    8476 ubuntu.go:190] setting up certificates
	I1213 10:27:31.957771    8476 provision.go:84] configureAuth start
	I1213 10:27:31.961622    8476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-416400
	I1213 10:27:32.029795    8476 provision.go:143] copyHostCerts
	I1213 10:27:32.030302    8476 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1213 10:27:32.030343    8476 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1213 10:27:32.030585    8476 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1213 10:27:32.031834    8476 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1213 10:27:32.031890    8476 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1213 10:27:32.032201    8476 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1213 10:27:32.033307    8476 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1213 10:27:32.033341    8476 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1213 10:27:32.033717    8476 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1213 10:27:32.034519    8476 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubenet-416400 san=[127.0.0.1 192.168.85.2 kubenet-416400 localhost minikube]
	I1213 10:27:32.150424    8476 provision.go:177] copyRemoteCerts
	I1213 10:27:32.155416    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:27:32.160422    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:32.214413    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	I1213 10:27:32.367375    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:27:32.404881    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I1213 10:27:32.437627    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:27:32.464627    8476 provision.go:87] duration metric: took 506.8482ms to configureAuth
	I1213 10:27:32.464627    8476 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:27:32.465634    8476 config.go:182] Loaded profile config "kubenet-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 10:27:32.469262    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:32.530015    8476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:27:32.530111    8476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 55079 <nil> <nil>}
	I1213 10:27:32.530111    8476 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 10:27:32.727229    8476 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1213 10:27:32.727229    8476 ubuntu.go:71] root file system type: overlay
	I1213 10:27:32.727229    8476 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 10:27:32.730229    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:32.781835    8476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:27:32.782115    8476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 55079 <nil> <nil>}
	I1213 10:27:32.782115    8476 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 10:27:32.980566    8476 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 10:27:32.985113    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:33.047448    8476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:27:33.048094    8476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 55079 <nil> <nil>}
	I1213 10:27:33.048138    8476 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
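
	The one-liner above is the idempotent half of the unit update: the freshly rendered docker.service.new is diffed against the live unit, and only when they differ is it moved into place followed by daemon-reload, enable, and restart, so an unchanged configuration never bounces the daemon. The unit body itself (note the empty ExecStart= that clears the inherited command before setting a new one) reads like template output; a minimal text/template sketch in that spirit (the template fragment and flag list are illustrative, not minikube's actual template):

	package main

	import (
		"os"
		"strings"
		"text/template"
	)

	// Illustrative fragment: clear the inherited ExecStart, then set our own,
	// as in the docker.service rendered above.
	const unitTmpl = `[Service]
	Type=notify
	Restart=always
	ExecStart=
	ExecStart=/usr/bin/dockerd {{join .Flags " "}}
	`

	func main() {
		t := template.Must(template.New("docker.service").
			Funcs(template.FuncMap{"join": strings.Join}).
			Parse(unitTmpl))
		flags := []string{"-H", "fd://", "--tlsverify", "--insecure-registry", "10.96.0.0/12"}
		if err := t.Execute(os.Stdout, struct{ Flags []string }{flags}); err != nil {
			panic(err)
		}
	}
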
	I1213 10:27:31.746729    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:31.766711    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:31.799712    5404 logs.go:282] 0 containers: []
	W1213 10:27:31.799712    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:31.802714    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:31.848351    5404 logs.go:282] 0 containers: []
	W1213 10:27:31.848351    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:31.852710    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:31.893847    5404 logs.go:282] 0 containers: []
	W1213 10:27:31.894377    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:31.897862    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:31.937061    5404 logs.go:282] 0 containers: []
	W1213 10:27:31.937061    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:31.942850    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:31.992025    5404 logs.go:282] 0 containers: []
	W1213 10:27:31.992025    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:31.996453    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:32.043414    5404 logs.go:282] 0 containers: []
	W1213 10:27:32.043414    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:32.047410    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:32.082416    5404 logs.go:282] 0 containers: []
	W1213 10:27:32.082416    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:32.086413    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:32.117413    5404 logs.go:282] 0 containers: []
	W1213 10:27:32.117413    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:32.117413    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:32.117413    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:32.184436    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:32.184436    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:32.248252    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:32.248252    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:32.288323    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:32.288323    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:32.395681    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:32.380582   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.381602   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.383843   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.385774   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.388153   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:32.380582   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.381602   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.383843   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.385774   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.388153   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
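	The repeated "connection refused" on localhost:8443 above means nothing is listening on the apiserver port inside this node yet, so the describe-nodes step cannot succeed until kube-apiserver comes up. A quick manual probe (sketch only; substitute the real profile name for <profile>):

	minikube ssh -p <profile> -- "sudo ss -ltnp | grep 8443"                  # is anything bound to the apiserver port?
	minikube ssh -p <profile> -- "curl -ksS https://localhost:8443/healthz"  # direct health probe against the same endpoint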
	I1213 10:27:32.395681    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:32.395681    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:34.939082    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:34.963857    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:35.002856    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.002856    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:35.005854    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:35.038851    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.038851    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:35.041857    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:35.073853    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.073853    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:35.077869    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:35.110852    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.110852    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:35.113850    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:35.152093    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.152093    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:35.156094    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:35.188087    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.188087    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:35.192090    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:35.222187    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.222187    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:35.226185    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:35.257190    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.257190    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:35.257190    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:35.257190    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:35.374442    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:35.357763   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.358774   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.360108   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.362218   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.363767   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:35.357763   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.358774   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.360108   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.362218   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.363767   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:35.374442    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:35.374442    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:35.414747    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:35.414747    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:35.470732    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:35.470732    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:35.530744    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:35.530744    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:34.752548    8476 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-13 10:27:32.964414860 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
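	The drop-in above relies on systemd's ExecStart-clearing convention: the first, empty ExecStart= wipes the command inherited from the base unit, and the second supplies the full dockerd command line, as the embedded comment explains. A sketch to confirm the override took effect on the node:

	minikube ssh -p kubenet-416400 -- "sudo systemctl cat docker.service"           # base unit plus rendered drop-in
	minikube ssh -p kubenet-416400 -- "systemctl show docker --property=ExecStart"  # exactly one effective command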
	I1213 10:27:34.752590    8476 machine.go:97] duration metric: took 3.6035571s to provisionDockerMachine
	I1213 10:27:34.752590    8476 client.go:176] duration metric: took 25.382363s to LocalClient.Create
	I1213 10:27:34.752660    8476 start.go:167] duration metric: took 25.3823991s to libmachine.API.Create "kubenet-416400"
	I1213 10:27:34.752660    8476 start.go:293] postStartSetup for "kubenet-416400" (driver="docker")
	I1213 10:27:34.752689    8476 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:27:34.757321    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:27:34.760792    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:34.815346    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	I1213 10:27:34.967363    8476 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:27:34.976448    8476 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:27:34.976489    8476 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:27:34.976523    8476 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1213 10:27:34.976670    8476 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1213 10:27:34.977231    8476 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> 29682.pem in /etc/ssl/certs
	I1213 10:27:34.981302    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 10:27:34.993858    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /etc/ssl/certs/29682.pem (1708 bytes)
	I1213 10:27:35.021854    8476 start.go:296] duration metric: took 269.1608ms for postStartSetup
	I1213 10:27:35.027861    8476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-416400
	I1213 10:27:35.080870    8476 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\config.json ...
	I1213 10:27:35.089862    8476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:27:35.093865    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:35.150107    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	I1213 10:27:35.268185    8476 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:27:35.276190    8476 start.go:128] duration metric: took 25.9099265s to createHost
	I1213 10:27:35.276190    8476 start.go:83] releasing machines lock for "kubenet-416400", held for 25.9099265s
	I1213 10:27:35.279209    8476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-416400
	I1213 10:27:35.343302    8476 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1213 10:27:35.346842    8476 ssh_runner.go:195] Run: cat /version.json
	I1213 10:27:35.350867    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:35.352295    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:35.411739    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	I1213 10:27:35.414747    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	W1213 10:27:35.548301    8476 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
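	Status 127 here looks like a host/guest mix-up rather than a real network failure: the probe was assembled with the Windows binary name curl.exe but executed inside the Linux node, where no such command exists. The Linux-native equivalent of the same check (sketch):

	minikube ssh -p kubenet-416400 -- curl -sS -m 2 https://registry.k8s.io/

	The registry warning emitted at 10:27:35.649 below follows from this failed probe.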
	I1213 10:27:35.553481    8476 ssh_runner.go:195] Run: systemctl --version
	I1213 10:27:35.573784    8476 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:27:35.585474    8476 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:27:35.589468    8476 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:27:35.633416    8476 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 10:27:35.633416    8476 start.go:496] detecting cgroup driver to use...
	I1213 10:27:35.633416    8476 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:27:35.633416    8476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1213 10:27:35.649009    8476 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1213 10:27:35.649009    8476 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1213 10:27:35.671618    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:27:35.696739    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:27:35.711492    8476 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:27:35.715488    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:27:35.732484    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:27:35.752096    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:27:35.772619    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:27:35.796702    8476 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:27:35.815300    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:27:35.839600    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:27:35.861332    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 10:27:35.884116    8476 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:27:35.903094    8476 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:27:35.919226    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:27:36.090670    8476 ssh_runner.go:195] Run: sudo systemctl restart containerd
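	The sed edits above rewrite a handful of keys in /etc/containerd/config.toml (sandbox_image, restrict_oom_score_adj, SystemdCgroup, the runc v2 runtime, conf_dir, enable_unprivileged_ports) before containerd is reloaded and restarted. A sketch to verify the result in place:

	minikube ssh -p kubenet-416400 -- "sudo grep -nE 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml"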
	I1213 10:27:36.249395    8476 start.go:496] detecting cgroup driver to use...
	I1213 10:27:36.249395    8476 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:27:36.253347    8476 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 10:27:36.275349    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:27:36.297606    8476 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:27:36.328195    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:27:36.353573    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:27:36.372805    8476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:27:36.406354    8476 ssh_runner.go:195] Run: which cri-dockerd
	I1213 10:27:36.417745    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 10:27:36.432809    8476 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (196 bytes)
	I1213 10:27:36.462872    8476 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 10:27:36.616454    8476 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 10:27:36.759020    8476 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 10:27:36.759020    8476 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 10:27:36.784951    8476 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1213 10:27:36.811665    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:27:36.964769    8476 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 10:27:37.921141    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:27:37.944144    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 10:27:37.967237    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 10:27:37.988498    8476 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 10:27:38.188916    8476 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 10:27:38.358397    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:27:38.521403    8476 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 10:27:38.546402    8476 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1213 10:27:38.569221    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:27:38.730646    8476 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 10:27:38.878189    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 10:27:38.898180    8476 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 10:27:38.902189    8476 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 10:27:38.911194    8476 start.go:564] Will wait 60s for crictl version
	I1213 10:27:38.916189    8476 ssh_runner.go:195] Run: which crictl
	I1213 10:27:38.926186    8476 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:27:38.973186    8476 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1213 10:27:38.978795    8476 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 10:27:39.038631    8476 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 10:27:38.092084    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:38.124676    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:38.161924    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.161924    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:38.164928    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:38.198945    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.198945    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:38.201915    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:38.228927    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.228927    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:38.231926    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:38.270851    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.270955    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:38.276558    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:38.313393    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.313393    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:38.316394    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:38.348406    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.348406    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:38.351414    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:38.380397    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.380397    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:38.385402    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:38.417397    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.417397    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:38.417397    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:38.417397    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:38.488395    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:38.488395    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:38.526408    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:38.526408    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:38.618667    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:38.608046   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.608871   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.611071   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.612089   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.612946   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:38.608046   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.608871   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.611071   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.612089   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.612946   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:38.618667    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:38.618667    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:38.648614    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:38.649617    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:39.102779    8476 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.2 ...
	I1213 10:27:39.107988    8476 cli_runner.go:164] Run: docker exec -t kubenet-416400 dig +short host.docker.internal
	I1213 10:27:39.257345    8476 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1213 10:27:39.260347    8476 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1213 10:27:39.268341    8476 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
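	That one-liner is an idempotent replace-or-append of a single /etc/hosts entry: strip any existing line for the name, append the fresh mapping, and copy the temp file back via sudo. The same pattern, generalized (sketch; values taken from this run):

	IP=192.168.65.254; NAME=host.minikube.internal
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts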
	I1213 10:27:39.287341    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:39.347887    8476 kubeadm.go:884] updating cluster {Name:kubenet-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:27:39.347887    8476 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:27:39.352726    8476 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 10:27:39.403212    8476 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 10:27:39.403212    8476 docker.go:621] Images already preloaded, skipping extraction
	I1213 10:27:39.407208    8476 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 10:27:39.440282    8476 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 10:27:39.440822    8476 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:27:39.440822    8476 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 docker true true} ...
	I1213 10:27:39.441138    8476 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubenet-416400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --pod-cidr=10.244.0.0/16
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kubenet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 10:27:39.446529    8476 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1213 10:27:39.559260    8476 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1213 10:27:39.559320    8476 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:27:39.559347    8476 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-416400 NodeName:kubenet-416400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:27:39.559347    8476 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubenet-416400"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
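	Before handing this file to kubeadm, it can be validated end to end with a dry run, which renders the static-pod manifests under a temp directory without changing the node (sketch, using the version-pinned binary from this run):

	minikube ssh -p kubenet-416400 -- "sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run"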
	I1213 10:27:39.563035    8476 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 10:27:39.576055    8476 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:27:39.580043    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:27:39.597066    8476 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (338 bytes)
	I1213 10:27:39.616038    8476 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 10:27:39.638041    8476 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1213 10:27:39.672042    8476 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:27:39.680043    8476 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:27:39.700046    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:27:39.887167    8476 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:27:39.917364    8476 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400 for IP: 192.168.85.2
	I1213 10:27:39.917364    8476 certs.go:195] generating shared ca certs ...
	I1213 10:27:39.917364    8476 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:39.918062    8476 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1213 10:27:39.918062    8476 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1213 10:27:39.918062    8476 certs.go:257] generating profile certs ...
	I1213 10:27:39.918912    8476 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\client.key
	I1213 10:27:39.918966    8476 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\client.crt with IP's: []
	I1213 10:27:39.969525    8476 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\client.crt ...
	I1213 10:27:39.969525    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\client.crt: {Name:mkded0c3a33573ddb9efde80db53622d23beebc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:39.970523    8476 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\client.key ...
	I1213 10:27:39.970523    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\client.key: {Name:mkddb0c680c1cfbc7fb76412dc59f990aa3351fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:39.970523    8476 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.key.da8001c6
	I1213 10:27:39.970523    8476 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.crt.da8001c6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1213 10:27:40.148355    8476 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.crt.da8001c6 ...
	I1213 10:27:40.148355    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.crt.da8001c6: {Name:mkb638048bd89c15c2729273b91ace1d4490353e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:40.148703    8476 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.key.da8001c6 ...
	I1213 10:27:40.148703    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.key.da8001c6: {Name:mk4e2e28e87911a65a5741680815685d917d2bc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:40.149871    8476 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.crt.da8001c6 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.crt
	I1213 10:27:40.164141    8476 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.key.da8001c6 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.key
	I1213 10:27:40.165495    8476 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.key
	I1213 10:27:40.165495    8476 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.crt with IP's: []
	I1213 10:27:40.389110    8476 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.crt ...
	I1213 10:27:40.389110    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.crt: {Name:mk9ea56953d9936fd5e08b8dc707cf8c179327b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:40.390173    8476 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.key ...
	I1213 10:27:40.390173    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.key: {Name:mk1d05f99191685ca712d4d7978411bd7096c85b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:40.404560    8476 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem (1338 bytes)
	W1213 10:27:40.404560    8476 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968_empty.pem, impossibly tiny 0 bytes
	I1213 10:27:40.404560    8476 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1213 10:27:40.404560    8476 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1213 10:27:40.404560    8476 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1213 10:27:40.405551    8476 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1213 10:27:40.405551    8476 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem (1708 bytes)
	I1213 10:27:40.406555    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:27:40.441360    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:27:40.476758    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:27:40.508936    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 10:27:40.539795    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 10:27:40.569170    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 10:27:40.700611    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:27:40.735214    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:27:40.767361    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /usr/share/ca-certificates/29682.pem (1708 bytes)
	I1213 10:27:40.807746    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:27:40.841101    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem --> /usr/share/ca-certificates/2968.pem (1338 bytes)
	I1213 10:27:40.876541    8476 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
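	With the certificates copied into /var/lib/minikube/certs, the apiserver cert's SANs (generated above for 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.85.2) can be spot-checked in place (sketch):

	minikube ssh -p kubenet-416400 -- "sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'"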
	I1213 10:27:40.905929    8476 ssh_runner.go:195] Run: openssl version
	I1213 10:27:40.919422    8476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29682.pem
	I1213 10:27:40.935412    8476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29682.pem /etc/ssl/certs/29682.pem
	I1213 10:27:40.958800    8476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29682.pem
	I1213 10:27:40.966774    8476 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:48 /usr/share/ca-certificates/29682.pem
	I1213 10:27:40.970772    8476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29682.pem
	I1213 10:27:41.020692    8476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:27:41.042422    8476 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/29682.pem /etc/ssl/certs/3ec20f2e.0
	I1213 10:27:41.062440    8476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:27:41.083044    8476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:27:41.101089    8476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:27:41.109913    8476 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:27:41.115807    8476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:27:41.166390    8476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:27:41.184269    8476 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 10:27:41.205563    8476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2968.pem
	I1213 10:27:41.225153    8476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2968.pem /etc/ssl/certs/2968.pem
	I1213 10:27:41.244522    8476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2968.pem
	I1213 10:27:41.255274    8476 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:48 /usr/share/ca-certificates/2968.pem
	I1213 10:27:41.258261    8476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2968.pem
	I1213 10:27:41.337148    8476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:27:41.361850    8476 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2968.pem /etc/ssl/certs/51391683.0
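	The <hash>.0 symlink names above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash convention, which lets the library find a trusted CA by hashed lookup in /etc/ssl/certs. The same link can be derived by hand (sketch):

	H=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$H.0"   # yields b5213941.0 for this CA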
	I1213 10:27:41.386416    8476 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:27:41.397702    8476 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 10:27:41.398038    8476 kubeadm.go:401] StartCluster: {Name:kubenet-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:27:41.402376    8476 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 10:27:41.436826    8476 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:27:41.456770    8476 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:27:41.472386    8476 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:27:41.476747    8476 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:27:41.495422    8476 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:27:41.495422    8476 kubeadm.go:158] found existing configuration files:
	
	I1213 10:27:41.499410    8476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 10:27:41.516241    8476 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:27:41.521896    8476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:27:41.541264    8476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 10:27:41.558570    8476 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:27:41.564101    8476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:27:41.584137    8476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 10:27:41.604304    8476 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:27:41.610955    8476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:27:41.630902    8476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 10:27:41.645473    8476 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:27:41.649275    8476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:27:41.666272    8476 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:27:41.782563    8476 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1213 10:27:41.788925    8476 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1213 10:27:41.907030    8476 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:27:41.206851    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:41.233354    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:41.265257    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.265257    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:41.269906    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:41.306686    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.306741    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:41.310710    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:41.357371    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.357427    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:41.361994    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:41.408206    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.408206    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:41.412215    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:41.440724    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.440761    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:41.444506    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:41.485572    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.485572    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:41.489246    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:41.524191    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.524191    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:41.528287    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:41.561636    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.561708    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:41.561708    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:41.561743    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:41.640633    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:41.640633    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:41.679302    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:41.680274    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:41.769509    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:41.756355   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.757496   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.758621   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.762100   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.763629   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:41.769509    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:41.769509    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:41.799016    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:41.799067    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
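This diagnostic cycle from process 5404 repeats roughly every three seconds while the wait is in progress. The per-component container lookup it performs is equivalent to the following shell sketch (the k8s_ naming convention is assumed from the filter strings in the log, where kubelet-managed containers are named k8s_<component>_<pod>_...):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(docker ps -a --filter="name=k8s_${c}" --format='{{.ID}}')
      # An empty result means the component container was never created.
      [ -n "$ids" ] || echo "No container was found matching \"${c}\"" >&2
    done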
	I1213 10:27:44.369546    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:44.392404    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:44.422173    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.422173    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:44.426709    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:44.462171    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.462253    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:44.466284    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:44.494675    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.494675    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:44.499090    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:44.525551    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.525576    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:44.529460    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:44.557893    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.557944    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:44.561644    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:44.592507    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.592507    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:44.598127    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:44.628090    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.628112    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:44.632134    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:44.680973    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.681027    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:44.681074    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:44.681074    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:44.750683    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:44.750683    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:44.791179    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:44.791179    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:44.880384    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:44.868761   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.869600   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.870808   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.872391   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.873598   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:44.880415    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:44.880415    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:44.912168    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:44.912168    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:47.473178    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:47.501052    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:47.534467    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.534540    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:47.538128    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:47.568455    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.568455    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:47.575037    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:47.610628    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.610628    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:47.614588    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:47.650306    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.650306    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:47.655401    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:47.688313    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.688313    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:47.691318    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:47.722314    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.722859    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:47.727885    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:47.758032    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.758032    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:47.761680    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:47.793670    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.793670    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:47.793670    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:47.793670    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:47.882682    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:47.871699   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.872599   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.874519   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.875664   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.876452   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:47.882682    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:47.882682    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:47.916355    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:47.916355    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:47.969201    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:47.969201    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:48.035144    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:48.036141    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:50.578488    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:50.600943    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:50.631833    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.631833    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:50.635998    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:50.674649    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.674649    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:50.677731    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:50.712195    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.712322    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:50.716398    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:50.750764    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.750764    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:50.754125    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:50.786595    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.786595    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:50.790175    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:50.818734    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.818734    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:50.821737    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:50.854679    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.854679    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:50.859104    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:50.889584    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.889584    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:50.889584    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:50.889584    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:50.947004    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:50.947004    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:50.984338    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:50.984338    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:51.071556    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:51.060341   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.061513   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.063176   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.064640   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.065750   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:51.071556    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:51.071556    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:51.102630    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:51.102630    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:53.655677    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:53.682918    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:53.715653    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.715653    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:53.718956    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:53.747498    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.747498    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:53.751451    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:53.781030    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.781060    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:53.785519    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:53.815077    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.815077    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:53.818373    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:53.851406    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.851432    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:53.855158    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:53.886371    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.886426    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:53.890230    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:53.921595    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.921595    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:53.925821    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:53.958793    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.958867    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:53.958867    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:53.958867    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:54.023643    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:54.023643    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:54.069221    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:54.069221    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:54.158534    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:54.148053   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:54.149254   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:54.150659   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:54.151827   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:54.152932   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:54.158534    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:54.158534    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:54.187711    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:54.187711    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:57.321321    8476 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 10:27:57.321858    8476 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:27:57.322090    8476 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:27:57.322290    8476 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:27:57.322547    8476 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:27:57.322713    8476 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:27:57.327382    8476 out.go:252]   - Generating certificates and keys ...
	I1213 10:27:57.327382    8476 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:27:57.327991    8476 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:27:57.328219    8476 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 10:27:57.328219    8476 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 10:27:57.328219    8476 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 10:27:57.328219    8476 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 10:27:57.328219    8476 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 10:27:57.328961    8476 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kubenet-416400 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 10:27:57.328961    8476 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 10:27:57.328961    8476 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kubenet-416400 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 10:27:57.328961    8476 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 10:27:57.328961    8476 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 10:27:57.328961    8476 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 10:27:57.328961    8476 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:27:57.328961    8476 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:27:57.329956    8476 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:27:57.329956    8476 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:27:57.329956    8476 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:27:57.329956    8476 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:27:57.329956    8476 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:27:57.329956    8476 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:27:57.333993    8476 out.go:252]   - Booting up control plane ...
	I1213 10:27:57.333993    8476 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:27:57.333993    8476 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:27:57.333993    8476 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:27:57.333993    8476 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:27:57.333993    8476 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:27:57.334957    8476 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:27:57.334957    8476 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:27:57.334957    8476 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:27:57.334957    8476 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:27:57.334957    8476 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:27:57.334957    8476 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.499474ms
	I1213 10:27:57.334957    8476 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 10:27:57.335962    8476 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1213 10:27:57.335962    8476 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 10:27:57.335962    8476 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 10:27:57.335962    8476 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.506067897s
	I1213 10:27:57.335962    8476 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.281282907s
	I1213 10:27:57.335962    8476 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 9.504426001s
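The three control-plane probes kubeadm reports here can be reproduced by hand from inside the node (endpoints taken verbatim from the log lines above; -k is assumed necessary because these components serve certificates signed by the cluster CA rather than a public one):

    curl -ks https://192.168.85.2:8443/livez      # kube-apiserver
    curl -ks https://127.0.0.1:10257/healthz      # kube-controller-manager
    curl -ks https://127.0.0.1:10259/livez        # kube-scheduler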
	I1213 10:27:57.335962    8476 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 10:27:57.336957    8476 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 10:27:57.336957    8476 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 10:27:57.336957    8476 kubeadm.go:319] [mark-control-plane] Marking the node kubenet-416400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 10:27:57.336957    8476 kubeadm.go:319] [bootstrap-token] Using token: fr9253.a366cb10hxgbs57g
	I1213 10:27:57.338959    8476 out.go:252]   - Configuring RBAC rules ...
	I1213 10:27:57.338959    8476 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 10:27:57.339952    8476 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 10:27:57.339952    8476 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 10:27:57.339952    8476 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 10:27:57.339952    8476 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 10:27:57.339952    8476 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 10:27:57.340953    8476 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 10:27:57.340953    8476 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 10:27:57.340953    8476 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 10:27:57.340953    8476 kubeadm.go:319] 
	I1213 10:27:57.340953    8476 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 10:27:57.340953    8476 kubeadm.go:319] 
	I1213 10:27:57.340953    8476 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 10:27:57.340953    8476 kubeadm.go:319] 
	I1213 10:27:57.340953    8476 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 10:27:57.340953    8476 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 10:27:57.340953    8476 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 10:27:57.341967    8476 kubeadm.go:319] 
	I1213 10:27:57.341967    8476 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 10:27:57.341967    8476 kubeadm.go:319] 
	I1213 10:27:57.341967    8476 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 10:27:57.341967    8476 kubeadm.go:319] 
	I1213 10:27:57.341967    8476 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 10:27:57.341967    8476 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 10:27:57.341967    8476 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 10:27:57.341967    8476 kubeadm.go:319] 
	I1213 10:27:57.341967    8476 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 10:27:57.341967    8476 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 10:27:57.341967    8476 kubeadm.go:319] 
	I1213 10:27:57.342958    8476 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token fr9253.a366cb10hxgbs57g \
	I1213 10:27:57.342958    8476 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4e186cc62273bb1ac6e3884beccb3b1254d51eaaca530d60f3ff3ceb07e5bb75 \
	I1213 10:27:57.342958    8476 kubeadm.go:319] 	--control-plane 
	I1213 10:27:57.342958    8476 kubeadm.go:319] 
	I1213 10:27:57.342958    8476 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 10:27:57.342958    8476 kubeadm.go:319] 
	I1213 10:27:57.342958    8476 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token fr9253.a366cb10hxgbs57g \
	I1213 10:27:57.342958    8476 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4e186cc62273bb1ac6e3884beccb3b1254d51eaaca530d60f3ff3ceb07e5bb75 
	I1213 10:27:57.342958    8476 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1213 10:27:57.342958    8476 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 10:27:57.348959    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:27:57.348959    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kubenet-416400 minikube.k8s.io/updated_at=2025_12_13T10_27_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453 minikube.k8s.io/name=kubenet-416400 minikube.k8s.io/primary=true
	I1213 10:27:57.359965    8476 ops.go:34] apiserver oom_adj: -16
	I1213 10:27:57.481312    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:27:57.982343    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:27:58.481678    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:27:58.981222    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:27:59.482569    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:27:59.981670    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:28:00.482737    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:28:00.667261    8476 kubeadm.go:1114] duration metric: took 3.3242542s to wait for elevateKubeSystemPrivileges
	I1213 10:28:00.667261    8476 kubeadm.go:403] duration metric: took 19.2689858s to StartCluster
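The repeated `kubectl get sa default` runs above are a polling loop: elevateKubeSystemPrivileges waits for the "default" ServiceAccount to exist, which signals that the controller-manager has finished bootstrapping the kube-system namespace. A hedged sketch of that wait (the ~500ms interval is inferred from the timestamps, not confirmed from source):

    until sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done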
	I1213 10:28:00.667261    8476 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:28:00.667261    8476 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:28:00.668362    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:28:00.670249    8476 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 10:28:00.670405    8476 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:28:00.670495    8476 addons.go:70] Setting storage-provisioner=true in profile "kubenet-416400"
	I1213 10:28:00.670495    8476 addons.go:239] Setting addon storage-provisioner=true in "kubenet-416400"
	I1213 10:28:00.670495    8476 addons.go:70] Setting default-storageclass=true in profile "kubenet-416400"
	I1213 10:28:00.670495    8476 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubenet-416400"
	I1213 10:28:00.670495    8476 host.go:66] Checking if "kubenet-416400" exists ...
	I1213 10:28:00.670495    8476 config.go:182] Loaded profile config "kubenet-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 10:28:00.670296    8476 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 10:28:00.672621    8476 out.go:179] * Verifying Kubernetes components...
	I1213 10:28:00.680707    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Status}}
	I1213 10:28:00.681870    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:28:00.683512    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Status}}
	I1213 10:28:00.745823    8476 addons.go:239] Setting addon default-storageclass=true in "kubenet-416400"
	I1213 10:28:00.745823    8476 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:27:56.751844    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:56.777473    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:56.819791    5404 logs.go:282] 0 containers: []
	W1213 10:27:56.819791    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:56.823836    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:56.851634    5404 logs.go:282] 0 containers: []
	W1213 10:27:56.851634    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:56.856515    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:56.890733    5404 logs.go:282] 0 containers: []
	W1213 10:27:56.890733    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:56.896015    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:56.929283    5404 logs.go:282] 0 containers: []
	W1213 10:27:56.929283    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:56.933600    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:56.965281    5404 logs.go:282] 0 containers: []
	W1213 10:27:56.965380    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:56.971621    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:57.007594    5404 logs.go:282] 0 containers: []
	W1213 10:27:57.007594    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:57.011652    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:57.041984    5404 logs.go:282] 0 containers: []
	W1213 10:27:57.041984    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:57.047208    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:57.080712    5404 logs.go:282] 0 containers: []
	W1213 10:27:57.080712    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:57.080712    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:57.080712    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:57.149704    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:57.149704    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:57.193071    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:57.193071    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:57.285994    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:57.274215   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:57.274873   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:57.277962   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:57.279748   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:57.281147   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:57.285994    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:57.285994    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:57.321321    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:57.321321    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:59.885480    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:59.908525    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:59.938475    5404 logs.go:282] 0 containers: []
	W1213 10:27:59.938475    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:59.942628    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:59.971795    5404 logs.go:282] 0 containers: []
	W1213 10:27:59.971795    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:59.980520    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:00.013354    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.013413    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:00.017504    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:00.052020    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.052020    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:00.055918    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:00.092456    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.092456    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:00.099457    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:00.132599    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.132599    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:00.136451    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:00.166632    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.166765    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:00.170268    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:00.200588    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.200588    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:00.200588    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:00.200588    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:00.270835    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:00.270835    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:00.309448    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:00.310446    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:00.403831    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:00.393165   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:00.394233   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:00.395506   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:00.396522   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:00.397851   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:28:00.403831    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:00.403831    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:00.431826    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:00.431826    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
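
The cycle above is minikube's log-collection fallback while the apiserver is down: probe for each control-plane container by the k8s_<component> name prefix that cri-dockerd assigns, then fall back to journalctl, dmesg, and crictl. A minimal Go sketch of the container probe, assuming a local docker CLI on PATH (listK8sContainers is an illustrative name, not minikube's actual logs.go code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listK8sContainers runs the same probe as the log lines above:
    // docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    func listK8sContainers(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
    		ids, _ := listK8sContainers(c)
    		// Zero IDs here corresponds to the
    		// "No container was found matching ..." warnings above.
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }
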
	I1213 10:28:00.745823    8476 host.go:66] Checking if "kubenet-416400" exists ...
	I1213 10:28:00.747823    8476 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:28:00.747823    8476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:28:00.751823    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:28:00.752838    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Status}}
	I1213 10:28:00.805827    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	I1213 10:28:00.806835    8476 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:28:00.806835    8476 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:28:00.809826    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:28:00.859695    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	I1213 10:28:00.877310    8476 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 10:28:01.093206    8476 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:28:01.096660    8476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:28:01.289059    8476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:28:01.688169    8476 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
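
The sed pipeline at 10:28:00.877310 is how that host record lands in CoreDNS: fetch the coredns ConfigMap, splice a hosts{} block in front of the forward directive, and replace the ConfigMap. A rough Go equivalent of the Corefile edit (injectHostRecord is a hypothetical helper; minikube really does this with sed inside the node):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHostRecord inserts a hosts{} block that resolves
    // host.minikube.internal ahead of CoreDNS's forward directive,
    // mirroring the sed insertion in the log above.
    func injectHostRecord(corefile, hostIP string) string {
    	hosts := "        hosts {\n" +
    		"           " + hostIP + " host.minikube.internal\n" +
    		"           fallthrough\n" +
    		"        }\n"
    	return strings.Replace(corefile,
    		"        forward . /etc/resolv.conf",
    		hosts+"        forward . /etc/resolv.conf", 1)
    }

    func main() {
    	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
    	fmt.Print(injectHostRecord(corefile, "192.168.65.254"))
    }
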
	I1213 10:28:01.693138    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:28:01.748392    8476 node_ready.go:35] waiting up to 15m0s for node "kubenet-416400" to be "Ready" ...
	I1213 10:28:01.777235    8476 node_ready.go:49] node "kubenet-416400" is "Ready"
	I1213 10:28:01.777235    8476 node_ready.go:38] duration metric: took 28.7755ms for node "kubenet-416400" to be "Ready" ...
	I1213 10:28:01.778242    8476 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:28:01.782492    8476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:02.197568    8476 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kubenet-416400" context rescaled to 1 replicas
	I1213 10:28:02.343589    8476 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.053978s)
	I1213 10:28:02.343589    8476 api_server.go:72] duration metric: took 1.673269s to wait for apiserver process to appear ...
	I1213 10:28:02.343589    8476 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.246374s)
	I1213 10:28:02.343677    8476 api_server.go:88] waiting for apiserver healthz status ...
	I1213 10:28:02.343720    8476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55078/healthz ...
	I1213 10:28:02.352594    8476 api_server.go:279] https://127.0.0.1:55078/healthz returned 200:
	ok
	I1213 10:28:02.355060    8476 api_server.go:141] control plane version: v1.34.2
	I1213 10:28:02.355060    8476 api_server.go:131] duration metric: took 11.3397ms to wait for apiserver health ...
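
The healthz wait above succeeds immediately because the apiserver is already up; the general pattern is to poll the forwarded host port until /healthz returns 200 with body "ok". A self-contained sketch under that assumption (waitHealthz and the 500ms interval are illustrative; the URL is the one from this log):

    package main

    import (
    	"context"
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitHealthz polls the apiserver /healthz endpoint over the forwarded
    // host port until it answers 200 "ok" or the context expires.
    func waitHealthz(ctx context.Context, url string) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			// The forwarded port serves the apiserver's self-signed cert.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			// Matches the `returned 200: ok` lines in the log.
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil
    			}
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-time.After(500 * time.Millisecond):
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
    	defer cancel()
    	fmt.Println(waitHealthz(ctx, "https://127.0.0.1:55078/healthz"))
    }
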
	I1213 10:28:02.355060    8476 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 10:28:02.363052    8476 system_pods.go:59] 8 kube-system pods found
	I1213 10:28:02.363052    8476 system_pods.go:61] "coredns-66bc5c9577-pzlst" [c0710760-8702-46dd-82cf-57f9c82bfa9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.363052    8476 system_pods.go:61] "coredns-66bc5c9577-qsf76" [941a59a1-7977-4e35-90e1-5e787611afef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.363052    8476 system_pods.go:61] "etcd-kubenet-416400" [a3364f15-398a-4392-8334-ba7bb2989af2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 10:28:02.363052    8476 system_pods.go:61] "kube-apiserver-kubenet-416400" [64e3f0cf-d02f-4c81-854c-634c396b4c0a] Running
	I1213 10:28:02.363052    8476 system_pods.go:61] "kube-controller-manager-kubenet-416400" [7af07742-4273-4dcd-8776-4db035d3905b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:28:02.363052    8476 system_pods.go:61] "kube-proxy-7bdqb" [a4863c98-3861-4d39-b4e6-81ebe763237d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 10:28:02.363052    8476 system_pods.go:61] "kube-scheduler-kubenet-416400" [df666217-8c7b-4d6d-b9ce-9740af155c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:28:02.363052    8476 system_pods.go:61] "storage-provisioner" [1cae84ec-bfa6-4586-83a9-dfdac48e2707] Pending
	I1213 10:28:02.363052    8476 system_pods.go:74] duration metric: took 7.9926ms to wait for pod list to return data ...
	I1213 10:28:02.363052    8476 default_sa.go:34] waiting for default service account to be created ...
	I1213 10:28:02.363944    8476 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1213 10:28:02.368689    8476 default_sa.go:45] found service account: "default"
	I1213 10:28:02.368689    8476 default_sa.go:55] duration metric: took 5.6365ms for default service account to be created ...
	I1213 10:28:02.368689    8476 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 10:28:02.368892    8476 addons.go:530] duration metric: took 1.6984619s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1213 10:28:02.374322    8476 system_pods.go:86] 8 kube-system pods found
	I1213 10:28:02.374322    8476 system_pods.go:89] "coredns-66bc5c9577-pzlst" [c0710760-8702-46dd-82cf-57f9c82bfa9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.374322    8476 system_pods.go:89] "coredns-66bc5c9577-qsf76" [941a59a1-7977-4e35-90e1-5e787611afef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.374322    8476 system_pods.go:89] "etcd-kubenet-416400" [a3364f15-398a-4392-8334-ba7bb2989af2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 10:28:02.374322    8476 system_pods.go:89] "kube-apiserver-kubenet-416400" [64e3f0cf-d02f-4c81-854c-634c396b4c0a] Running
	I1213 10:28:02.374322    8476 system_pods.go:89] "kube-controller-manager-kubenet-416400" [7af07742-4273-4dcd-8776-4db035d3905b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:28:02.374322    8476 system_pods.go:89] "kube-proxy-7bdqb" [a4863c98-3861-4d39-b4e6-81ebe763237d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 10:28:02.374322    8476 system_pods.go:89] "kube-scheduler-kubenet-416400" [df666217-8c7b-4d6d-b9ce-9740af155c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:28:02.374322    8476 system_pods.go:89] "storage-provisioner" [1cae84ec-bfa6-4586-83a9-dfdac48e2707] Pending
	I1213 10:28:02.374322    8476 retry.go:31] will retry after 257.90094ms: missing components: kube-dns, kube-proxy
	I1213 10:28:02.647317    8476 system_pods.go:86] 8 kube-system pods found
	I1213 10:28:02.647382    8476 system_pods.go:89] "coredns-66bc5c9577-pzlst" [c0710760-8702-46dd-82cf-57f9c82bfa9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.647382    8476 system_pods.go:89] "coredns-66bc5c9577-qsf76" [941a59a1-7977-4e35-90e1-5e787611afef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.647382    8476 system_pods.go:89] "etcd-kubenet-416400" [a3364f15-398a-4392-8334-ba7bb2989af2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 10:28:02.647382    8476 system_pods.go:89] "kube-apiserver-kubenet-416400" [64e3f0cf-d02f-4c81-854c-634c396b4c0a] Running
	I1213 10:28:02.647448    8476 system_pods.go:89] "kube-controller-manager-kubenet-416400" [7af07742-4273-4dcd-8776-4db035d3905b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:28:02.647448    8476 system_pods.go:89] "kube-proxy-7bdqb" [a4863c98-3861-4d39-b4e6-81ebe763237d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 10:28:02.647448    8476 system_pods.go:89] "kube-scheduler-kubenet-416400" [df666217-8c7b-4d6d-b9ce-9740af155c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:28:02.647496    8476 system_pods.go:89] "storage-provisioner" [1cae84ec-bfa6-4586-83a9-dfdac48e2707] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:28:02.647496    8476 retry.go:31] will retry after 305.033982ms: missing components: kube-dns, kube-proxy
	I1213 10:28:02.960601    8476 system_pods.go:86] 8 kube-system pods found
	I1213 10:28:02.960642    8476 system_pods.go:89] "coredns-66bc5c9577-pzlst" [c0710760-8702-46dd-82cf-57f9c82bfa9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.960678    8476 system_pods.go:89] "coredns-66bc5c9577-qsf76" [941a59a1-7977-4e35-90e1-5e787611afef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.960678    8476 system_pods.go:89] "etcd-kubenet-416400" [a3364f15-398a-4392-8334-ba7bb2989af2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 10:28:02.960728    8476 system_pods.go:89] "kube-apiserver-kubenet-416400" [64e3f0cf-d02f-4c81-854c-634c396b4c0a] Running
	I1213 10:28:02.960728    8476 system_pods.go:89] "kube-controller-manager-kubenet-416400" [7af07742-4273-4dcd-8776-4db035d3905b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:28:02.960728    8476 system_pods.go:89] "kube-proxy-7bdqb" [a4863c98-3861-4d39-b4e6-81ebe763237d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 10:28:02.960728    8476 system_pods.go:89] "kube-scheduler-kubenet-416400" [df666217-8c7b-4d6d-b9ce-9740af155c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:28:02.960780    8476 system_pods.go:89] "storage-provisioner" [1cae84ec-bfa6-4586-83a9-dfdac48e2707] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:28:02.960803    8476 retry.go:31] will retry after 352.340429ms: missing components: kube-dns, kube-proxy
	I1213 10:28:03.376766    8476 system_pods.go:86] 8 kube-system pods found
	I1213 10:28:03.376766    8476 system_pods.go:89] "coredns-66bc5c9577-pzlst" [c0710760-8702-46dd-82cf-57f9c82bfa9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:03.376766    8476 system_pods.go:89] "coredns-66bc5c9577-qsf76" [941a59a1-7977-4e35-90e1-5e787611afef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:03.376766    8476 system_pods.go:89] "etcd-kubenet-416400" [a3364f15-398a-4392-8334-ba7bb2989af2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 10:28:03.376766    8476 system_pods.go:89] "kube-apiserver-kubenet-416400" [64e3f0cf-d02f-4c81-854c-634c396b4c0a] Running
	I1213 10:28:03.376766    8476 system_pods.go:89] "kube-controller-manager-kubenet-416400" [7af07742-4273-4dcd-8776-4db035d3905b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:28:03.376766    8476 system_pods.go:89] "kube-proxy-7bdqb" [a4863c98-3861-4d39-b4e6-81ebe763237d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 10:28:03.376766    8476 system_pods.go:89] "kube-scheduler-kubenet-416400" [df666217-8c7b-4d6d-b9ce-9740af155c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:28:03.376766    8476 system_pods.go:89] "storage-provisioner" [1cae84ec-bfa6-4586-83a9-dfdac48e2707] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:28:03.377765    8476 retry.go:31] will retry after 379.080105ms: missing components: kube-dns, kube-proxy
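
Each retry above follows the same shape: list the kube-system pods, compute which required components are not yet Running, and sleep a short randomized backoff before polling again. A toy Go version of that decision (missingComponents and the simulated pod map are illustrative stand-ins for the real pod list):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // missingComponents returns the required components whose pods
    // are not yet Running, as in the retry.go lines above.
    func missingComponents(podPhases map[string]string, required []string) []string {
    	var missing []string
    	for _, r := range required {
    		if podPhases[r] != "Running" {
    			missing = append(missing, r)
    		}
    	}
    	return missing
    }

    func main() {
    	podPhases := map[string]string{"kube-dns": "Pending", "kube-proxy": "Pending"}
    	required := []string{"kube-dns", "kube-proxy"}
    	for {
    		missing := missingComponents(podPhases, required)
    		if len(missing) == 0 {
    			fmt.Println("all components running")
    			return
    		}
    		backoff := time.Duration(250+rand.Intn(150)) * time.Millisecond
    		fmt.Printf("will retry after %v: missing components: %v\n", backoff, missing)
    		time.Sleep(backoff)
    		podPhases["kube-dns"] = "Running" // simulate the pods coming up
    		podPhases["kube-proxy"] = "Running"
    	}
    }
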
	I1213 10:28:02.990203    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:03.012584    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:03.048099    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.049085    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:03.054131    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:03.090044    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.090114    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:03.094206    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:03.124610    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.124610    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:03.128713    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:03.158624    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.158624    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:03.162039    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:03.197023    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.197023    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:03.201011    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:03.231523    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.231523    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:03.238992    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:03.270780    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.270780    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:03.273777    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:03.307802    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.307802    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:03.307802    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:03.307802    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:28:03.365023    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:03.365023    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:03.434753    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:03.434753    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:03.474998    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:03.474998    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:03.558479    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:03.548624   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.550169   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.550790   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.552338   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.553567   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:28:03.558479    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:03.558479    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:06.093878    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:06.119160    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:06.151920    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.151956    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:06.155686    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:03.767616    8476 system_pods.go:86] 7 kube-system pods found
	I1213 10:28:03.767736    8476 system_pods.go:89] "coredns-66bc5c9577-pzlst" [c0710760-8702-46dd-82cf-57f9c82bfa9a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:03.767736    8476 system_pods.go:89] "etcd-kubenet-416400" [a3364f15-398a-4392-8334-ba7bb2989af2] Running
	I1213 10:28:03.767836    8476 system_pods.go:89] "kube-apiserver-kubenet-416400" [64e3f0cf-d02f-4c81-854c-634c396b4c0a] Running
	I1213 10:28:03.767860    8476 system_pods.go:89] "kube-controller-manager-kubenet-416400" [7af07742-4273-4dcd-8776-4db035d3905b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:28:03.767860    8476 system_pods.go:89] "kube-proxy-7bdqb" [a4863c98-3861-4d39-b4e6-81ebe763237d] Running
	I1213 10:28:03.767860    8476 system_pods.go:89] "kube-scheduler-kubenet-416400" [df666217-8c7b-4d6d-b9ce-9740af155c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:28:03.767860    8476 system_pods.go:89] "storage-provisioner" [1cae84ec-bfa6-4586-83a9-dfdac48e2707] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:28:03.767920    8476 system_pods.go:126] duration metric: took 1.399211s to wait for k8s-apps to be running ...
	I1213 10:28:03.767952    8476 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 10:28:03.772800    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:28:03.793452    8476 system_svc.go:56] duration metric: took 25.5002ms WaitForService to wait for kubelet
	I1213 10:28:03.793452    8476 kubeadm.go:587] duration metric: took 3.1231108s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:28:03.793452    8476 node_conditions.go:102] verifying NodePressure condition ...
	I1213 10:28:03.799850    8476 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1213 10:28:03.799942    8476 node_conditions.go:123] node cpu capacity is 16
	I1213 10:28:03.799942    8476 node_conditions.go:105] duration metric: took 6.4898ms to run NodePressure ...
	I1213 10:28:03.800002    8476 start.go:242] waiting for startup goroutines ...
	I1213 10:28:03.800002    8476 start.go:247] waiting for cluster config update ...
	I1213 10:28:03.800034    8476 start.go:256] writing updated cluster config ...
	I1213 10:28:03.805062    8476 ssh_runner.go:195] Run: rm -f paused
	I1213 10:28:03.812457    8476 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 10:28:03.818438    8476 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pzlst" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 10:28:05.831273    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:08.330368    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	I1213 10:28:06.185340    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.185340    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:06.189047    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:06.218663    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.218713    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:06.223022    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:06.251817    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.251817    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:06.256048    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:06.288967    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.289042    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:06.293045    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:06.324404    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.324404    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:06.328470    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:06.359488    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.359488    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:06.363305    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:06.395085    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.395085    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:06.395085    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:06.395085    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:06.460705    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:06.460705    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:06.500531    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:06.500531    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:06.584202    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:06.573119   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.576304   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.577709   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.579122   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.580090   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:28:06.584202    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:06.584202    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:06.612936    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:06.612936    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:28:09.171143    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:09.196436    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:09.230003    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.230072    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:09.234113    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:09.263594    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.263629    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:09.267574    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:09.295583    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.295671    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:09.300744    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:09.330627    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.330627    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:09.334426    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:09.370279    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.370279    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:09.374820    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:09.404955    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.405033    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:09.410253    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:09.441568    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.441568    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:09.445297    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:09.485821    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.485874    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:09.485874    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:09.485936    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:09.548603    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:09.548603    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:09.588521    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:09.588521    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:09.678327    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:09.666892   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.667836   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.670310   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.671394   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.672438   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:28:09.678369    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:09.678369    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:09.705500    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:09.705500    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:28:10.333290    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:12.830400    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	I1213 10:28:12.262086    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:12.290635    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:12.327110    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.327110    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:12.331105    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:12.360305    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.360305    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:12.367813    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:12.398968    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.399045    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:12.403042    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:12.436089    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.436089    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:12.439942    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:12.471734    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.471734    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:12.475722    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:12.505991    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.506024    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:12.509742    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:12.539425    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.539425    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:12.543823    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:12.573279    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.573344    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:12.573344    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:12.573344    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:12.636807    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:12.636807    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:12.677094    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:12.677094    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:12.762424    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:12.751891   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.752690   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.755186   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.756173   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.756852   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:28:12.762424    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:12.762424    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:12.790164    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:12.790164    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:28:15.344891    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:15.368646    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:15.404255    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.404255    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:15.409408    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:15.441938    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.441938    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:15.445068    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:15.475697    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.475697    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:15.479253    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:15.511327    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.511327    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:15.515265    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:15.545395    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.545395    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:15.548941    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:15.579842    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.579918    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:15.584969    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:15.614571    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.614571    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:15.618436    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:15.650365    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.650427    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:15.650427    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:15.650427    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:15.714351    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:15.714351    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:15.752018    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:15.752018    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:15.834772    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:15.824883   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.826055   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.826571   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.829124   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.829823   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:28:15.834772    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:15.834772    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:15.866850    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:15.866850    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:28:14.830848    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:17.329771    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	I1213 10:28:18.423576    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:18.449885    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:18.482529    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.482601    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:18.485766    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:18.514138    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.514797    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:18.518214    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:18.550542    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.550542    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:18.553540    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:18.584106    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.584106    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:18.588197    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:18.619945    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.619977    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:18.623644    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:18.654453    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.654453    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:18.657446    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:18.687250    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.687250    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:18.690703    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:18.717150    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.717150    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:18.717150    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:18.717150    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:28:18.770937    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:18.770937    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:18.835919    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:18.835919    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:18.872319    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:18.873326    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:18.962288    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:18.952563   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.953751   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.955148   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.956811   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.959348   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:28:18.962288    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:18.963246    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:21.496578    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:21.522995    5404 out.go:203] 
	W1213 10:28:21.525440    5404 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1213 10:28:21.525581    5404 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1213 10:28:21.525667    5404 out.go:285] * Related issues:
	W1213 10:28:21.525667    5404 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1213 10:28:21.525824    5404 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1213 10:28:21.528379    5404 out.go:203] 
	W1213 10:28:19.831718    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:21.833516    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:24.330384    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:26.331207    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:28.332900    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:30.334351    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:32.835020    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:35.333186    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:37.333782    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	I1213 10:28:39.834925    8476 pod_ready.go:94] pod "coredns-66bc5c9577-pzlst" is "Ready"
	I1213 10:28:39.834966    8476 pod_ready.go:86] duration metric: took 36.0154698s for pod "coredns-66bc5c9577-pzlst" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:28:39.845165    8476 pod_ready.go:83] waiting for pod "etcd-kubenet-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:28:39.854539    8476 pod_ready.go:94] pod "etcd-kubenet-416400" is "Ready"
	I1213 10:28:39.855541    8476 pod_ready.go:86] duration metric: took 10.3407ms for pod "etcd-kubenet-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:28:39.863535    8476 pod_ready.go:83] waiting for pod "kube-apiserver-kubenet-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:28:39.871543    8476 pod_ready.go:94] pod "kube-apiserver-kubenet-416400" is "Ready"
	I1213 10:28:39.871543    8476 pod_ready.go:86] duration metric: took 8.0079ms for pod "kube-apiserver-kubenet-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:28:39.874535    8476 pod_ready.go:83] waiting for pod "kube-controller-manager-kubenet-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:28:40.025973    8476 pod_ready.go:94] pod "kube-controller-manager-kubenet-416400" is "Ready"
	I1213 10:28:40.025973    8476 pod_ready.go:86] duration metric: took 151.4354ms for pod "kube-controller-manager-kubenet-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:28:40.229779    8476 pod_ready.go:83] waiting for pod "kube-proxy-7bdqb" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:28:40.625654    8476 pod_ready.go:94] pod "kube-proxy-7bdqb" is "Ready"
	I1213 10:28:40.625654    8476 pod_ready.go:86] duration metric: took 395.7533ms for pod "kube-proxy-7bdqb" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:28:40.828010    8476 pod_ready.go:83] waiting for pod "kube-scheduler-kubenet-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:28:41.225199    8476 pod_ready.go:94] pod "kube-scheduler-kubenet-416400" is "Ready"
	I1213 10:28:41.225199    8476 pod_ready.go:86] duration metric: took 397.0906ms for pod "kube-scheduler-kubenet-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:28:41.225199    8476 pod_ready.go:40] duration metric: took 37.4121912s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 10:28:41.318573    8476 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 10:28:41.321589    8476 out.go:179] * Done! kubectl is now configured to use "kubenet-416400" cluster and "default" namespace by default
	
	
	==> Docker <==
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.519842040Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.519963651Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.519978553Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.519984253Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.519989854Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.520014956Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.520057560Z" level=info msg="Initializing buildkit"
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.639585638Z" level=info msg="Completed buildkit initialization"
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.645206773Z" level=info msg="Daemon has completed initialization"
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.645396691Z" level=info msg="API listen on [::]:2376"
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.645511202Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 10:19:56 no-preload-803600 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.645529304Z" level=info msg="API listen on /run/docker.sock"
	Dec 13 10:19:57 no-preload-803600 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 10:19:57 no-preload-803600 cri-dockerd[1223]: time="2025-12-13T10:19:57Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 13 10:19:57 no-preload-803600 cri-dockerd[1223]: time="2025-12-13T10:19:57Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 13 10:19:57 no-preload-803600 cri-dockerd[1223]: time="2025-12-13T10:19:57Z" level=info msg="Start docker client with request timeout 0s"
	Dec 13 10:19:57 no-preload-803600 cri-dockerd[1223]: time="2025-12-13T10:19:57Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 13 10:19:57 no-preload-803600 cri-dockerd[1223]: time="2025-12-13T10:19:57Z" level=info msg="Loaded network plugin cni"
	Dec 13 10:19:57 no-preload-803600 cri-dockerd[1223]: time="2025-12-13T10:19:57Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 13 10:19:57 no-preload-803600 cri-dockerd[1223]: time="2025-12-13T10:19:57Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 13 10:19:57 no-preload-803600 cri-dockerd[1223]: time="2025-12-13T10:19:57Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 13 10:19:57 no-preload-803600 cri-dockerd[1223]: time="2025-12-13T10:19:57Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 13 10:19:57 no-preload-803600 cri-dockerd[1223]: time="2025-12-13T10:19:57Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 13 10:19:57 no-preload-803600 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:35:06.177457   17408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:35:06.180559   17408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:35:06.182194   17408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:35:06.183367   17408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:35:06.184368   17408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000002] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +7.347224] CPU: 1 PID: 487650 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f03540a7b20
	[  +0.000039] Code: Unable to access opcode bytes at RIP 0x7f03540a7af6.
	[  +0.000001] RSP: 002b:00007fff4615c900 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.848535] CPU: 14 PID: 487834 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f24bdd40b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f24bdd40af6.
	[  +0.000001] RSP: 002b:00007ffcef45f750 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +9.262444] tmpfs: Unknown parameter 'noswap'
	[ +10.454536] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 10:35:06 up  2:11,  0 user,  load average: 0.14, 1.22, 2.49
	Linux no-preload-803600 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:35:02 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:35:03 no-preload-803600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1205.
	Dec 13 10:35:03 no-preload-803600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:35:03 no-preload-803600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:35:03 no-preload-803600 kubelet[17217]: E1213 10:35:03.633919   17217 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:35:03 no-preload-803600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:35:03 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:35:04 no-preload-803600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1206.
	Dec 13 10:35:04 no-preload-803600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:35:04 no-preload-803600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:35:04 no-preload-803600 kubelet[17239]: E1213 10:35:04.381420   17239 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:35:04 no-preload-803600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:35:04 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:35:05 no-preload-803600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1207.
	Dec 13 10:35:05 no-preload-803600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:35:05 no-preload-803600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:35:05 no-preload-803600 kubelet[17269]: E1213 10:35:05.144248   17269 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:35:05 no-preload-803600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:35:05 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:35:05 no-preload-803600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1208.
	Dec 13 10:35:05 no-preload-803600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:35:05 no-preload-803600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:35:05 no-preload-803600 kubelet[17318]: E1213 10:35:05.884534   17318 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:35:05 no-preload-803600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:35:05 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
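Every kubectl probe in the log above dies the same way: "connection refused" on localhost:8443, meaning nothing is listening on the apiserver port inside the node. A minimal sketch for confirming that by hand, reusing the profile name and the pgrep pattern from the log above (plain minikube and curl invocations, not part of the test suite):

	# Is any kube-apiserver process alive inside the node?
	minikube ssh -p no-preload-803600 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# If pgrep finds nothing, the healthz endpoint fails exactly like kubectl did:
	minikube ssh -p no-preload-803600 -- curl -ksf https://localhost:8443/healthz || echo 'apiserver not responding'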
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-803600 -n no-preload-803600
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-803600 -n no-preload-803600: exit status 2 (591.7542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-803600" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.56s)
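The kubelet journal above shows the likely root cause for this group of failures: kubelet v1.35.0-beta.0 refuses to validate its configuration on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), and with the systemd restart counter past 1200 it never stays up long enough for the apiserver to return. A quick check of which hierarchy the WSL2-backed node actually sees (cgroup2fs means v2, tmpfs means v1), followed by a commonly suggested WSL2 switch to cgroup v2; the workaround is an assumption, not something verified in this run:

	# Inspect the cgroup filesystem type inside the node:
	minikube ssh -p no-preload-803600 -- stat -fc %T /sys/fs/cgroup/
	# Hypothetical host-side workaround: force cgroup v2 for WSL2 by adding the
	# following to %UserProfile%\.wslconfig on Windows, then running `wsl --shutdown`:
	#   [wsl2]
	#   kernelCommandLine = cgroup_no_v1=all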

x
+
TestStartStop/group/newest-cni/serial/Pause (9.72s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-307000 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-307000 -n newest-cni-307000
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-307000 -n newest-cni-307000: exit status 2 (588.6356ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-307000 -n newest-cni-307000
E1213 10:28:28.845772    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-307000 -n newest-cni-307000: exit status 2 (582.3684ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-307000 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-307000 -n newest-cni-307000
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-307000 -n newest-cni-307000: exit status 2 (598.5661ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause apiserver status = "Stopped"; want = "Running"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-307000 -n newest-cni-307000
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-307000 -n newest-cni-307000: exit status 2 (584.2779ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
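Both pause and unpause return success (the Audit table in the post-mortem logs below confirms both commands ran to completion), yet every subsequent status probe reports "Stopped" for the apiserver and the kubelet. The exact sequence the test drives is easy to replay by hand; a sketch using a plain minikube binary in place of the CI build at out/minikube-windows-amd64.exe, with POSIX-shell quoting:

	minikube pause -p newest-cni-307000 --alsologtostderr -v=1
	minikube status -p newest-cni-307000 --format '{{.APIServer}}'   # test expects "Paused" here
	minikube unpause -p newest-cni-307000 --alsologtostderr -v=1
	minikube status -p newest-cni-307000 --format '{{.APIServer}}'   # test expects "Running" here
	minikube status -p newest-cni-307000 --format '{{.Kubelet}}'     # test expects "Running" here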
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-307000
helpers_test.go:244: (dbg) docker inspect newest-cni-307000:

-- stdout --
	[
	    {
	        "Id": "cc243490f4045f6275620fead3cf743bdcc06793f30944d53d3d0e22c416211e",
	        "Created": "2025-12-13T10:11:37.912113644Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 431795,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:22:07.362257704Z",
	            "FinishedAt": "2025-12-13T10:22:04.657974104Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/cc243490f4045f6275620fead3cf743bdcc06793f30944d53d3d0e22c416211e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cc243490f4045f6275620fead3cf743bdcc06793f30944d53d3d0e22c416211e/hostname",
	        "HostsPath": "/var/lib/docker/containers/cc243490f4045f6275620fead3cf743bdcc06793f30944d53d3d0e22c416211e/hosts",
	        "LogPath": "/var/lib/docker/containers/cc243490f4045f6275620fead3cf743bdcc06793f30944d53d3d0e22c416211e/cc243490f4045f6275620fead3cf743bdcc06793f30944d53d3d0e22c416211e-json.log",
	        "Name": "/newest-cni-307000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-307000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-307000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1fd6cedff83bee99df393eab952a55cc2565a988396fbf552640cb0ef5f70bba-init/diff:/var/lib/docker/overlay2/429aa299c6fcdb1695d08ec7c893c57c033afffcd3ec41fc904bf3236db5abde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1fd6cedff83bee99df393eab952a55cc2565a988396fbf552640cb0ef5f70bba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1fd6cedff83bee99df393eab952a55cc2565a988396fbf552640cb0ef5f70bba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1fd6cedff83bee99df393eab952a55cc2565a988396fbf552640cb0ef5f70bba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-307000",
	                "Source": "/var/lib/docker/volumes/newest-cni-307000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-307000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-307000",
	                "name.minikube.sigs.k8s.io": "newest-cni-307000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ac71d39e43dea35bc9d6021f600e0a448ae9dca45dd0a410ca179f856b12121e",
	            "SandboxKey": "/var/run/docker/netns/ac71d39e43de",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53942"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53943"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53944"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53939"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53940"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-307000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "091d798055d24cd11a8819044665f960a2f1124bb052fb661c5793e42aeec481",
	                    "EndpointID": "d344064538b6f36208f8c5d92ef1203acaac8ed63c99703b04ed68908d156813",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-307000",
	                        "cc243490f404"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
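Most of the inspect dump above is boilerplate; the fields that matter here are the container state and the host port published for the apiserver's 8443/tcp. Both can be read directly with Go templates instead of scanning the full JSON (standard docker inspect --format usage, offered as a sketch):

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' newest-cni-307000
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-307000

Per the dump the container is running and not paused, with 8443/tcp mapped to 127.0.0.1:53940, so the "Stopped" statuses come from the Kubernetes processes inside the node, not from Docker itself.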
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-307000 -n newest-cni-307000
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-307000 -n newest-cni-307000: exit status 2 (567.9965ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-307000 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-307000 logs -n 25: (1.2428498s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                      │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-416400 sudo journalctl -xeu kubelet --all --full --no-pager          │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo cat /etc/kubernetes/kubelet.conf                         │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo cat /var/lib/kubelet/config.yaml                         │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo systemctl status docker --all --full --no-pager          │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo systemctl cat docker --no-pager                          │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo cat /etc/docker/daemon.json                              │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo docker system info                                       │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo systemctl status cri-docker --all --full --no-pager      │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo systemctl cat cri-docker --no-pager                      │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo cat /usr/lib/systemd/system/cri-docker.service           │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo cri-dockerd --version                                    │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo systemctl status containerd --all --full --no-pager      │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo systemctl cat containerd --no-pager                      │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo cat /lib/systemd/system/containerd.service               │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo cat /etc/containerd/config.toml                          │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo containerd config dump                                   │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo systemctl status crio --all --full --no-pager            │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │                     │
	│ ssh     │ -p bridge-416400 sudo systemctl cat crio --no-pager                            │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo crio config                                              │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ delete  │ -p bridge-416400                                                               │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ image   │ newest-cni-307000 image list --format=json                                     │ newest-cni-307000 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:28 UTC │ 13 Dec 25 10:28 UTC │
	│ pause   │ -p newest-cni-307000 --alsologtostderr -v=1                                    │ newest-cni-307000 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:28 UTC │ 13 Dec 25 10:28 UTC │
	│ unpause │ -p newest-cni-307000 --alsologtostderr -v=1                                    │ newest-cni-307000 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:28 UTC │ 13 Dec 25 10:28 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:27:08
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:27:08.467331    8476 out.go:360] Setting OutFile to fd 1212 ...
	I1213 10:27:08.510327    8476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:27:08.510327    8476 out.go:374] Setting ErrFile to fd 1652...
	I1213 10:27:08.510327    8476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:27:08.525338    8476 out.go:368] Setting JSON to false
	I1213 10:27:08.528326    8476 start.go:133] hostinfo: {"hostname":"minikube4","uptime":7435,"bootTime":1765614192,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 10:27:08.529330    8476 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 10:27:08.533334    8476 out.go:179] * [kubenet-416400] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 10:27:08.536332    8476 notify.go:221] Checking for updates...
	I1213 10:27:08.538327    8476 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:27:08.541325    8476 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:27:08.543338    8476 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 10:27:08.545327    8476 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:27:08.547331    8476 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:27:08.550333    8476 config.go:182] Loaded profile config "bridge-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 10:27:08.551337    8476 config.go:182] Loaded profile config "newest-cni-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:27:08.551337    8476 config.go:182] Loaded profile config "no-preload-803600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:27:08.551337    8476 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:27:08.665330    8476 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 10:27:08.669336    8476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:27:08.911222    8476 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:27:08.888781942 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:27:08.914226    8476 out.go:179] * Using the docker driver based on user configuration
	I1213 10:27:08.917218    8476 start.go:309] selected driver: docker
	I1213 10:27:08.917218    8476 start.go:927] validating driver "docker" against <nil>
	I1213 10:27:08.917218    8476 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:27:09.005866    8476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:27:09.274907    8476 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:27:09.25177994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:27:09.275859    8476 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:27:09.275859    8476 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:27:09.278852    8476 out.go:179] * Using Docker Desktop driver with root privileges
	I1213 10:27:09.281854    8476 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1213 10:27:09.281854    8476 start.go:353] cluster config:
	{Name:kubenet-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:27:09.284873    8476 out.go:179] * Starting "kubenet-416400" primary control-plane node in "kubenet-416400" cluster
	I1213 10:27:09.288885    8476 cache.go:134] Beginning downloading kic base image for docker with docker
	I1213 10:27:09.290853    8476 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:27:09.296882    8476 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:27:09.296882    8476 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:27:09.296882    8476 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1213 10:27:09.296882    8476 cache.go:65] Caching tarball of preloaded images
	I1213 10:27:09.297854    8476 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 10:27:09.297854    8476 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1213 10:27:09.297854    8476 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\config.json ...
	I1213 10:27:09.297854    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\config.json: {Name:mk0f8afb036d1878ac71666ce4d58fd434d1389e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:09.364866    8476 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:27:09.364866    8476 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:27:09.364866    8476 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:27:09.364866    8476 start.go:360] acquireMachinesLock for kubenet-416400: {Name:mk28dcadbda914f3b76421bc1eef202d654b5e0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:27:09.365883    8476 start.go:364] duration metric: took 0s to acquireMachinesLock for "kubenet-416400"
	I1213 10:27:09.365883    8476 start.go:93] Provisioning new machine with config: &{Name:kubenet-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 10:27:09.365883    8476 start.go:125] createHost starting for "" (driver="docker")
	I1213 10:27:06.633379    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:06.659612    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:06.687667    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.687737    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:06.691602    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:06.721405    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.721405    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:06.725270    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:06.757478    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.757478    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:06.761297    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:06.801212    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.801212    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:06.805113    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:06.849918    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.849918    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:06.853787    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:06.888435    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.888435    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:06.895174    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:06.930085    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.930085    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:06.933086    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:06.964089    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.964089    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:06.964089    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:06.964089    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:07.052109    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:07.052109    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:07.092822    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:07.092822    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:07.184921    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:07.172596   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.173907   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.175435   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.176746   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.177730   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:07.172596   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.173907   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.175435   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.176746   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.177730   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
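
	Every container probe in the cycle above comes back empty and every kubectl request to https://localhost:8443 is refused, so these errors all reduce to one condition: no kube-apiserver container is running on the node yet. The check minikube keeps retrying can be reproduced by hand; a minimal sketch, assuming shell access to the minikube node:

	    # same check the log repeats: is a kube-apiserver process up for this cluster?
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
	      && echo "apiserver running" || echo "apiserver not running"
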
	I1213 10:27:07.184921    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:07.184921    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:07.212614    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:07.212614    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:09.772840    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:09.803912    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:09.843377    5404 logs.go:282] 0 containers: []
	W1213 10:27:09.843377    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:09.846881    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:09.876528    5404 logs.go:282] 0 containers: []
	W1213 10:27:09.876528    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:09.879529    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:09.910044    5404 logs.go:282] 0 containers: []
	W1213 10:27:09.910044    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:09.916549    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:09.959417    5404 logs.go:282] 0 containers: []
	W1213 10:27:09.959417    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:09.964602    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:09.999344    5404 logs.go:282] 0 containers: []
	W1213 10:27:09.999344    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:10.002336    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:10.032356    5404 logs.go:282] 0 containers: []
	W1213 10:27:10.032356    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:10.036336    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:10.070437    5404 logs.go:282] 0 containers: []
	W1213 10:27:10.070489    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:10.074554    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:10.112271    5404 logs.go:282] 0 containers: []
	W1213 10:27:10.112330    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:10.112330    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:10.112330    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:10.147886    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:10.147886    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:10.243310    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:10.232461   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.233610   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.235121   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.236121   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.237697   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:10.232461   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.233610   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.235121   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.236121   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.237697   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:10.243405    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:10.243405    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:10.272729    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:10.272729    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:10.326215    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:10.326215    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:09.368853    8476 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 10:27:09.369855    8476 start.go:159] libmachine.API.Create for "kubenet-416400" (driver="docker")
	I1213 10:27:09.369855    8476 client.go:173] LocalClient.Create starting
	I1213 10:27:09.369855    8476 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1213 10:27:09.369855    8476 main.go:143] libmachine: Decoding PEM data...
	I1213 10:27:09.369855    8476 main.go:143] libmachine: Parsing certificate...
	I1213 10:27:09.369855    8476 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1213 10:27:09.369855    8476 main.go:143] libmachine: Decoding PEM data...
	I1213 10:27:09.369855    8476 main.go:143] libmachine: Parsing certificate...
	I1213 10:27:09.375556    8476 cli_runner.go:164] Run: docker network inspect kubenet-416400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 10:27:09.428532    8476 cli_runner.go:211] docker network inspect kubenet-416400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 10:27:09.431540    8476 network_create.go:284] running [docker network inspect kubenet-416400] to gather additional debugging logs...
	I1213 10:27:09.431540    8476 cli_runner.go:164] Run: docker network inspect kubenet-416400
	W1213 10:27:09.477538    8476 cli_runner.go:211] docker network inspect kubenet-416400 returned with exit code 1
	I1213 10:27:09.477538    8476 network_create.go:287] error running [docker network inspect kubenet-416400]: docker network inspect kubenet-416400: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubenet-416400 not found
	I1213 10:27:09.477538    8476 network_create.go:289] output of [docker network inspect kubenet-416400]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubenet-416400 not found
	
	** /stderr **
	I1213 10:27:09.481534    8476 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:27:09.553692    8476 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:27:09.568537    8476 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:27:09.580557    8476 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e4c0f0}
	I1213 10:27:09.581551    8476 network_create.go:124] attempt to create docker network kubenet-416400 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1213 10:27:09.584547    8476 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400
	W1213 10:27:09.637542    8476 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400 returned with exit code 1
	W1213 10:27:09.637542    8476 network_create.go:149] failed to create docker network kubenet-416400 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1213 10:27:09.637542    8476 network_create.go:116] failed to create docker network kubenet-416400 192.168.67.0/24, will retry: subnet is taken
	I1213 10:27:09.664108    8476 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:27:09.678099    8476 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001885710}
	I1213 10:27:09.678099    8476 network_create.go:124] attempt to create docker network kubenet-416400 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 10:27:09.682098    8476 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400
	W1213 10:27:09.738074    8476 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400 returned with exit code 1
	W1213 10:27:09.738074    8476 network_create.go:149] failed to create docker network kubenet-416400 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1213 10:27:09.738074    8476 network_create.go:116] failed to create docker network kubenet-416400 192.168.76.0/24, will retry: subnet is taken
	I1213 10:27:09.757990    8476 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:27:09.771930    8476 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001910480}
	I1213 10:27:09.772001    8476 network_create.go:124] attempt to create docker network kubenet-416400 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1213 10:27:09.775120    8476 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400
	I1213 10:27:09.917706    8476 network_create.go:108] docker network kubenet-416400 192.168.85.0/24 created
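
	The two failed attempts above were rejected because 192.168.67.0/24 and 192.168.76.0/24 already overlapped with existing Docker address pools; minikube simply walks its list of private /24 subnets until one is accepted (192.168.85.0/24 here). To see which subnets are already claimed on a host, assuming a standard Docker CLI in a bash shell, something like:

	    # list every Docker network together with its subnet(s)
	    docker network ls --format '{{.Name}}' \
	      | xargs -n1 docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'
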
	I1213 10:27:09.917706    8476 kic.go:121] calculated static IP "192.168.85.2" for the "kubenet-416400" container
	I1213 10:27:09.926674    8476 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 10:27:09.990344    8476 cli_runner.go:164] Run: docker volume create kubenet-416400 --label name.minikube.sigs.k8s.io=kubenet-416400 --label created_by.minikube.sigs.k8s.io=true
	I1213 10:27:10.043336    8476 oci.go:103] Successfully created a docker volume kubenet-416400
	I1213 10:27:10.046336    8476 cli_runner.go:164] Run: docker run --rm --name kubenet-416400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-416400 --entrypoint /usr/bin/test -v kubenet-416400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 10:27:11.508914    8476 cli_runner.go:217] Completed: docker run --rm --name kubenet-416400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-416400 --entrypoint /usr/bin/test -v kubenet-416400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.4625571s)
	I1213 10:27:11.508914    8476 oci.go:107] Successfully prepared a docker volume kubenet-416400
	I1213 10:27:11.508914    8476 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:27:11.508914    8476 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 10:27:11.513316    8476 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-416400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 10:27:12.902491    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:12.927076    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:12.960518    5404 logs.go:282] 0 containers: []
	W1213 10:27:12.960518    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:12.964255    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:12.994335    5404 logs.go:282] 0 containers: []
	W1213 10:27:12.994335    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:12.998437    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:13.029262    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.029262    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:13.032271    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:13.063264    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.063264    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:13.066261    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:13.100216    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.100278    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:13.103950    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:13.137029    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.137029    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:13.140883    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:13.174413    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.174413    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:13.178202    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:13.207016    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.207016    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:13.207016    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:13.207016    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:13.259542    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:13.259542    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:13.332062    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:13.332062    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:13.371879    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:13.371879    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:13.456462    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:13.445517   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.446626   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.447825   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.448792   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.450006   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:13.445517   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.446626   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.447825   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.448792   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.450006   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:13.456462    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:13.456462    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:15.989415    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:16.012448    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:16.052242    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.052312    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:16.055633    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:16.090683    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.090683    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:16.093931    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:16.133949    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.133949    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:16.138532    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:16.171831    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.171831    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:16.175955    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:16.216817    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.216864    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:16.221712    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:16.258393    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.258393    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:16.261397    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:16.294407    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.294407    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:16.297391    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:16.333410    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.333410    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:16.333410    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:16.333410    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:16.410413    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:16.410413    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:16.450393    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:16.450393    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:16.546373    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:16.533035   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.534931   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.537458   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.540395   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.542178   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:16.533035   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.534931   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.537458   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.540395   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.542178   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:16.546373    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:16.546373    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:16.575806    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:16.575806    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:19.148785    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:19.175720    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:19.209231    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.209231    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:19.217486    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:19.260811    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.260866    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:19.267265    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:19.314924    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.314924    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:19.320918    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:19.357550    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.357550    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:19.361556    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:19.392800    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.392800    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:19.397769    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:19.441959    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.441959    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:19.444967    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:19.479965    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.479965    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:19.484482    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:19.525249    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.525314    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:19.525357    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:19.525357    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:19.570778    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:19.570778    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:19.680558    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:19.668248   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.670354   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.672621   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.673972   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.675837   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:19.668248   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.670354   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.672621   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.673972   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.675837   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:19.680656    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:19.680693    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:19.714060    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:19.714103    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:19.764555    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:19.764555    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:22.334977    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:22.359551    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:22.400355    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.400355    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:22.404363    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:22.438349    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.438349    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:22.442349    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:22.473511    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.473511    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:22.478566    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:22.512393    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.512393    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:22.516409    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:22.550405    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.550405    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:22.553404    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:22.584398    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.584398    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:22.588395    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:22.615398    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.615398    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:22.618396    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:22.649404    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.649404    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:22.649404    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:22.649404    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:22.710398    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:22.710398    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:22.751988    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:22.751988    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:22.843768    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:22.835619   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.836770   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.837683   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.838841   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.839832   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:22.835619   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.836770   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.837683   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.838841   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.839832   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:22.843768    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:22.843768    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:22.871626    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:22.871626    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:25.434319    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:25.459020    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:25.500957    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.500957    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:25.505654    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:25.533996    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.534053    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:25.538297    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:25.569653    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.569653    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:25.573591    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:25.606004    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.606004    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:25.612212    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:25.641756    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.641835    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:25.645703    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:25.677304    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.677342    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:25.680988    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:25.712812    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.712812    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:25.716992    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:25.748063    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.748063    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:25.748063    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:25.748063    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:25.800759    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:25.800759    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:25.873214    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:25.873214    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:25.914015    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:25.914015    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:26.003163    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:25.989841   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.991273   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.992553   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.995529   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.997804   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:25.989841   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.991273   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.992553   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.995529   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.997804   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:26.003163    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:26.003163    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:26.833120    8476 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-416400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (15.3195505s)
	I1213 10:27:26.833120    8476 kic.go:203] duration metric: took 15.3239811s to extract preloaded images to volume ...
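
	The 15s step above is the kic preload pattern: a disposable container runs tar with the host-side .tar.lz4 bind-mounted read-only and the cluster's named volume mounted at /extractDir, so the volume is pre-populated with container images before the node container ever starts. The same pattern with hypothetical names (any image that ships /usr/bin/tar plus lz4 would do):

	    # populate a named volume from a host tarball via a throwaway container
	    docker volume create demo-data
	    docker run --rm \
	      -v "$PWD/preloaded.tar.lz4:/preloaded.tar:ro" \
	      -v demo-data:/extractDir \
	      --entrypoint /usr/bin/tar \
	      example/image-with-lz4 -I lz4 -xf /preloaded.tar -C /extractDir
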
	I1213 10:27:26.839444    8476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:27:27.097722    8476 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:27:27.079878659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:27:27.101719    8476 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 10:27:27.338932    8476 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-416400 --name kubenet-416400 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-416400 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-416400 --network kubenet-416400 --ip 192.168.85.2 --volume kubenet-416400:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
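The docker run above publishes the container's service ports (22, 2376, 5000, 8443, 32443) to ephemeral host ports on 127.0.0.1. The assigned bindings can be recovered afterwards with docker port; for example, using the container name from this run:

    docker port kubenet-416400 22/tcp     # prints 127.0.0.1:55079 in this run (see the SSH client lines below)
    docker port kubenet-416400 8443/tcp   # host endpoint for the apiserver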
	I1213 10:27:28.058796    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Running}}
	I1213 10:27:28.125687    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Status}}
	I1213 10:27:28.182686    8476 cli_runner.go:164] Run: docker exec kubenet-416400 stat /var/lib/dpkg/alternatives/iptables
	I1213 10:27:28.308932    8476 oci.go:144] the created container "kubenet-416400" has a running status.
	I1213 10:27:28.308932    8476 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa...
	I1213 10:27:28.438434    8476 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 10:27:28.537436    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:28.561363    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:28.619392    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.619392    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:28.623396    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:28.669400    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.669400    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:28.676410    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:28.717401    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.717401    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:28.721393    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:28.757400    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.757400    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:28.760393    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:28.800402    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.800402    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:28.803398    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:28.841400    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.841400    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:28.844399    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:28.878399    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.878399    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:28.882403    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:28.916403    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.916403    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:28.916403    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:28.916403    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:28.992400    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:28.992400    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:29.040404    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:29.040404    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:29.149363    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:29.137915   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.139172   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.141264   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.142415   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.144176   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:29.137915   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.139172   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.141264   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.142415   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.144176   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:29.149363    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:29.149363    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:29.183066    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:29.183066    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
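The backtick substitution in the command above resolves crictl to its full path when installed, and otherwise leaves the bare word crictl so the failing invocation falls through the || to plain docker ps. The same logic written with modern $() substitution, as a sketch:

    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a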
	I1213 10:27:28.513430    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Status}}
	I1213 10:27:28.575704    8476 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 10:27:28.575704    8476 kic_runner.go:114] Args: [docker exec --privileged kubenet-416400 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 10:27:28.715410    8476 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa...
	I1213 10:27:31.090843    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Status}}
	I1213 10:27:31.148980    8476 machine.go:94] provisionDockerMachine start ...
	I1213 10:27:31.152618    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:31.213696    8476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:27:31.227691    8476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 55079 <nil> <nil>}
	I1213 10:27:31.227691    8476 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:27:31.426494    8476 main.go:143] libmachine: SSH cmd err, output: <nil>: kubenet-416400
	
	I1213 10:27:31.426494    8476 ubuntu.go:182] provisioning hostname "kubenet-416400"
	I1213 10:27:31.430633    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:31.483323    8476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:27:31.484332    8476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 55079 <nil> <nil>}
	I1213 10:27:31.484332    8476 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubenet-416400 && echo "kubenet-416400" | sudo tee /etc/hostname
	I1213 10:27:31.695552    8476 main.go:143] libmachine: SSH cmd err, output: <nil>: kubenet-416400
	
	I1213 10:27:31.701394    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:31.759724    8476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:27:31.759724    8476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 55079 <nil> <nil>}
	I1213 10:27:31.759724    8476 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubenet-416400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-416400/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubenet-416400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:27:31.957771    8476 main.go:143] libmachine: SSH cmd err, output: <nil>: 
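The script above is minikube's idempotent /etc/hosts update: if no line already ends with the hostname, it rewrites an existing 127.0.1.1 entry in place, or appends one if none exists. A standalone sketch of the same logic (HOSTNAME is a placeholder):

    HOSTNAME=kubenet-416400
    if ! grep -q "\s$HOSTNAME\$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1\s' /etc/hosts; then
        sudo sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 $HOSTNAME/" /etc/hosts
      else
        echo "127.0.1.1 $HOSTNAME" | sudo tee -a /etc/hosts
      fi
    fi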
	I1213 10:27:31.957771    8476 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1213 10:27:31.957771    8476 ubuntu.go:190] setting up certificates
	I1213 10:27:31.957771    8476 provision.go:84] configureAuth start
	I1213 10:27:31.961622    8476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-416400
	I1213 10:27:32.029795    8476 provision.go:143] copyHostCerts
	I1213 10:27:32.030302    8476 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1213 10:27:32.030343    8476 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1213 10:27:32.030585    8476 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1213 10:27:32.031834    8476 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1213 10:27:32.031890    8476 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1213 10:27:32.032201    8476 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1213 10:27:32.033307    8476 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1213 10:27:32.033341    8476 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1213 10:27:32.033717    8476 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1213 10:27:32.034519    8476 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubenet-416400 san=[127.0.0.1 192.168.85.2 kubenet-416400 localhost minikube]
	I1213 10:27:32.150424    8476 provision.go:177] copyRemoteCerts
	I1213 10:27:32.155416    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:27:32.160422    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:32.214413    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	I1213 10:27:32.367375    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:27:32.404881    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I1213 10:27:32.437627    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:27:32.464627    8476 provision.go:87] duration metric: took 506.8482ms to configureAuth
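configureAuth signs a server certificate with the local minikube CA, carrying the SANs listed above (127.0.0.1, 192.168.85.2, kubenet-416400, localhost, minikube). A rough openssl equivalent, assuming ca.pem/ca-key.pem are the CA pair (file names illustrative, not minikube's actual code path):

    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.kubenet-416400" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:kubenet-416400,DNS:localhost,DNS:minikube')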
	I1213 10:27:32.464627    8476 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:27:32.465634    8476 config.go:182] Loaded profile config "kubenet-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 10:27:32.469262    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:32.530015    8476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:27:32.530111    8476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 55079 <nil> <nil>}
	I1213 10:27:32.530111    8476 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 10:27:32.727229    8476 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1213 10:27:32.727229    8476 ubuntu.go:71] root file system type: overlay
	I1213 10:27:32.727229    8476 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 10:27:32.730229    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:32.781835    8476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:27:32.782115    8476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 55079 <nil> <nil>}
	I1213 10:27:32.782115    8476 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 10:27:32.980566    8476 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 10:27:32.985113    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:33.047448    8476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:27:33.048094    8476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 55079 <nil> <nil>}
	I1213 10:27:33.048138    8476 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
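This one-liner makes the unit update idempotent: diff -u exits zero when the rendered file matches the installed unit (so nothing runs), and non-zero on any difference, in which case the new file is moved into place and docker is reloaded, enabled, and restarted. The same pattern, generalized (UNIT is a placeholder):

    UNIT=/lib/systemd/system/docker.service
    sudo diff -u "$UNIT" "$UNIT.new" || {
      sudo mv "$UNIT.new" "$UNIT"
      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
    }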
	I1213 10:27:31.746729    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:31.766711    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:31.799712    5404 logs.go:282] 0 containers: []
	W1213 10:27:31.799712    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:31.802714    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:31.848351    5404 logs.go:282] 0 containers: []
	W1213 10:27:31.848351    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:31.852710    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:31.893847    5404 logs.go:282] 0 containers: []
	W1213 10:27:31.894377    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:31.897862    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:31.937061    5404 logs.go:282] 0 containers: []
	W1213 10:27:31.937061    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:31.942850    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:31.992025    5404 logs.go:282] 0 containers: []
	W1213 10:27:31.992025    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:31.996453    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:32.043414    5404 logs.go:282] 0 containers: []
	W1213 10:27:32.043414    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:32.047410    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:32.082416    5404 logs.go:282] 0 containers: []
	W1213 10:27:32.082416    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:32.086413    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:32.117413    5404 logs.go:282] 0 containers: []
	W1213 10:27:32.117413    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:32.117413    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:32.117413    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:32.184436    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:32.184436    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:32.248252    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:32.248252    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:32.288323    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:32.288323    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:32.395681    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:32.380582   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.381602   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.383843   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.385774   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.388153   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:32.380582   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.381602   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.383843   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.385774   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.388153   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:32.395681    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:32.395681    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:34.939082    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:34.963857    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:35.002856    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.002856    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:35.005854    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:35.038851    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.038851    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:35.041857    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:35.073853    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.073853    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:35.077869    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:35.110852    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.110852    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:35.113850    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:35.152093    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.152093    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:35.156094    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:35.188087    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.188087    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:35.192090    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:35.222187    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.222187    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:35.226185    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:35.257190    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.257190    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:35.257190    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:35.257190    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:35.374442    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:35.357763   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.358774   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.360108   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.362218   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.363767   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:35.357763   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.358774   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.360108   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.362218   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.363767   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:35.374442    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:35.374442    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:35.414747    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:35.414747    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:35.470732    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:35.470732    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:35.530744    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:35.530744    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:34.752548    8476 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-13 10:27:32.964414860 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1213 10:27:34.752590    8476 machine.go:97] duration metric: took 3.6035571s to provisionDockerMachine
	I1213 10:27:34.752590    8476 client.go:176] duration metric: took 25.382363s to LocalClient.Create
	I1213 10:27:34.752660    8476 start.go:167] duration metric: took 25.3823991s to libmachine.API.Create "kubenet-416400"
	I1213 10:27:34.752660    8476 start.go:293] postStartSetup for "kubenet-416400" (driver="docker")
	I1213 10:27:34.752689    8476 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:27:34.757321    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:27:34.760792    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:34.815346    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	I1213 10:27:34.967363    8476 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:27:34.976448    8476 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:27:34.976489    8476 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:27:34.976523    8476 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1213 10:27:34.976670    8476 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1213 10:27:34.977231    8476 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> 29682.pem in /etc/ssl/certs
	I1213 10:27:34.981302    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 10:27:34.993858    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /etc/ssl/certs/29682.pem (1708 bytes)
	I1213 10:27:35.021854    8476 start.go:296] duration metric: took 269.1608ms for postStartSetup
	I1213 10:27:35.027861    8476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-416400
	I1213 10:27:35.080870    8476 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\config.json ...
	I1213 10:27:35.089862    8476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:27:35.093865    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:35.150107    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	I1213 10:27:35.268185    8476 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:27:35.276190    8476 start.go:128] duration metric: took 25.9099265s to createHost
	I1213 10:27:35.276190    8476 start.go:83] releasing machines lock for "kubenet-416400", held for 25.9099265s
	I1213 10:27:35.279209    8476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-416400
	I1213 10:27:35.343302    8476 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1213 10:27:35.346842    8476 ssh_runner.go:195] Run: cat /version.json
	I1213 10:27:35.350867    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:35.352295    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:35.411739    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	I1213 10:27:35.414747    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	W1213 10:27:35.548301    8476 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
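The status-127 failure above is a binary-name mismatch rather than a network error: the Windows executable name curl.exe is invoked inside the Linux container, where the binary is plain curl, so the registry probe never runs. The "Failing to connect to https://registry.k8s.io/" warning a few lines below likely stems from this. A manual re-check under that assumption (assuming curl is present in the kicbase image):

    docker exec kubenet-416400 curl -sS -m 2 https://registry.k8s.io/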
	I1213 10:27:35.553481    8476 ssh_runner.go:195] Run: systemctl --version
	I1213 10:27:35.573784    8476 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:27:35.585474    8476 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:27:35.589468    8476 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:27:35.633416    8476 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 10:27:35.633416    8476 start.go:496] detecting cgroup driver to use...
	I1213 10:27:35.633416    8476 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:27:35.633416    8476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1213 10:27:35.649009    8476 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1213 10:27:35.649009    8476 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1213 10:27:35.671618    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:27:35.696739    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:27:35.711492    8476 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:27:35.715488    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:27:35.732484    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:27:35.752096    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:27:35.772619    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:27:35.796702    8476 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:27:35.815300    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:27:35.839600    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:27:35.861332    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 10:27:35.884116    8476 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:27:35.903094    8476 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:27:35.919226    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:27:36.090670    8476 ssh_runner.go:195] Run: sudo systemctl restart containerd
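The series of sed edits above rewrites /etc/containerd/config.toml in place (sandbox image, SystemdCgroup/cgroup driver, runc runtime version, CNI conf dir, unprivileged ports) before this restart. One way to spot-check the result, as a sketch assuming the container from this run:

    docker exec kubenet-416400 grep -nE 'SystemdCgroup|sandbox_image|enable_unprivileged_ports|conf_dir' /etc/containerd/config.toml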
	I1213 10:27:36.249395    8476 start.go:496] detecting cgroup driver to use...
	I1213 10:27:36.249395    8476 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:27:36.253347    8476 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 10:27:36.275349    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:27:36.297606    8476 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:27:36.328195    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:27:36.353573    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:27:36.372805    8476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:27:36.406354    8476 ssh_runner.go:195] Run: which cri-dockerd
	I1213 10:27:36.417745    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 10:27:36.432809    8476 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (196 bytes)
	I1213 10:27:36.462872    8476 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 10:27:36.616454    8476 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 10:27:36.759020    8476 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 10:27:36.759020    8476 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 10:27:36.784951    8476 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1213 10:27:36.811665    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:27:36.964769    8476 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 10:27:37.921141    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:27:37.944144    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 10:27:37.967237    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 10:27:37.988498    8476 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 10:27:38.188916    8476 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 10:27:38.358397    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:27:38.521403    8476 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 10:27:38.546402    8476 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1213 10:27:38.569221    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:27:38.730646    8476 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 10:27:38.878189    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 10:27:38.898180    8476 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 10:27:38.902189    8476 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 10:27:38.911194    8476 start.go:564] Will wait 60s for crictl version
	I1213 10:27:38.916189    8476 ssh_runner.go:195] Run: which crictl
	I1213 10:27:38.926186    8476 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:27:38.973186    8476 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1213 10:27:38.978795    8476 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 10:27:39.038631    8476 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 10:27:38.092084    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:38.124676    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:38.161924    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.161924    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:38.164928    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:38.198945    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.198945    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:38.201915    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:38.228927    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.228927    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:38.231926    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:38.270851    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.270955    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:38.276558    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:38.313393    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.313393    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:38.316394    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:38.348406    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.348406    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:38.351414    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:38.380397    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.380397    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:38.385402    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:38.417397    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.417397    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:38.417397    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:38.417397    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:38.488395    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:38.488395    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:38.526408    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:38.526408    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:38.618667    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:38.608046   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.608871   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.611071   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.612089   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.612946   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:38.608046   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.608871   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.611071   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.612089   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.612946   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:38.618667    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:38.618667    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:38.648614    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:38.649617    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
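
Two starts are interleaved in this log: PID 8476 is the kubenet-416400 bring-up, while PID 5404 is another profile still waiting for its apiserver, re-running the same inventory every ~3 seconds: probe each control-plane component with a docker ps name filter, then fall back to gathering kubelet, dmesg, describe-nodes, Docker, and container-status logs. A minimal Go sketch of that probe loop, assuming docker is on PATH (pollContainer is an illustrative name, not minikube's API; the real code runs through an SSH runner on the node):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// pollContainer mirrors the repeated probe in the log: list containers
// whose name matches k8s_<component> and return their IDs, retrying a
// few times before giving up.
func pollContainer(component string, attempts int) ([]string, error) {
	filter := fmt.Sprintf("name=k8s_%s", component)
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", filter, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		if ids := strings.Fields(string(out)); len(ids) > 0 {
			return ids, nil
		}
		time.Sleep(3 * time.Second) // the log shows ~3s between cycles
	}
	return nil, fmt.Errorf("no container matching %q", filter)
}

func main() {
	ids, err := pollContainer("kube-apiserver", 5)
	fmt.Println(ids, err)
}
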
	I1213 10:27:39.102779    8476 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.2 ...
	I1213 10:27:39.107988    8476 cli_runner.go:164] Run: docker exec -t kubenet-416400 dig +short host.docker.internal
	I1213 10:27:39.257345    8476 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1213 10:27:39.260347    8476 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1213 10:27:39.268341    8476 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:27:39.287341    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:39.347887    8476 kubeadm.go:884] updating cluster {Name:kubenet-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:27:39.347887    8476 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:27:39.352726    8476 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 10:27:39.403212    8476 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 10:27:39.403212    8476 docker.go:621] Images already preloaded, skipping extraction
	I1213 10:27:39.407208    8476 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 10:27:39.440282    8476 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 10:27:39.440822    8476 cache_images.go:86] Images are preloaded, skipping loading
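
The duplicated `docker images` listing above is deliberate: minikube checks the image inventory once to decide whether to extract the preload tarball, and once more before loading cached images. A hedged sketch of that containment check (preloaded is an illustrative name, not minikube's API):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// preloaded reports whether every expected image already appears in
// `docker images --format {{.Repository}}:{{.Tag}}`, the same listing
// the log captures between the -- stdout -- markers.
func preloaded(expected []string) (bool, error) {
	out, err := exec.Command("docker", "images",
		"--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	for _, img := range expected {
		if !have[img] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := preloaded([]string{
		"registry.k8s.io/kube-apiserver:v1.34.2",
		"registry.k8s.io/etcd:3.6.5-0",
	})
	fmt.Println(ok, err)
}
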
	I1213 10:27:39.440822    8476 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 docker true true} ...
	I1213 10:27:39.441138    8476 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubenet-416400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --pod-cidr=10.244.0.0/16
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kubenet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 10:27:39.446529    8476 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1213 10:27:39.559260    8476 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1213 10:27:39.559320    8476 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:27:39.559347    8476 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-416400 NodeName:kubenet-416400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
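
The CgroupDriver:cgroupfs field above comes straight from the `docker info --format {{.CgroupDriver}}` call at 10:27:39.446529: kubelet and the container runtime must agree on a cgroup driver, so minikube queries Docker and propagates the answer into the generated KubeletConfiguration. A one-call sketch of that query:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask Docker which cgroup driver it uses; the value ends up as
	// cgroupDriver in the KubeletConfiguration further below.
	out, err := exec.Command("docker", "info",
		"--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("cgroup driver:", strings.TrimSpace(string(out)))
}
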
	I1213 10:27:39.559347    8476 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubenet-416400"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 10:27:39.563035    8476 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 10:27:39.576055    8476 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:27:39.580043    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:27:39.597066    8476 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (338 bytes)
	I1213 10:27:39.616038    8476 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 10:27:39.638041    8476 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
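
The kubeadm.yaml just copied to the node is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A stdlib-only sketch that lists the kinds in such a stream (illustrative; minikube templates the real file internally):

package main

import (
	"fmt"
	"strings"
)

// kinds extracts the `kind:` of each YAML document in a multi-document
// stream like the kubeadm config printed above.
func kinds(stream string) []string {
	var out []string
	for _, doc := range strings.Split(stream, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			t := strings.TrimSpace(line)
			if strings.HasPrefix(t, "kind:") {
				out = append(out, strings.TrimSpace(strings.TrimPrefix(t, "kind:")))
				break
			}
		}
	}
	return out
}

func main() {
	stream := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n"
	fmt.Println(kinds(stream)) // [InitConfiguration ClusterConfiguration]
}
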
	I1213 10:27:39.672042    8476 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:27:39.680043    8476 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
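
Both hosts-file edits (host.minikube.internal earlier, control-plane.minikube.internal here) use the same idempotent shell pattern: filter out any line already tagged with the name, append a fresh ip<TAB>name entry, and copy the result back. An equivalent Go sketch (upsertHost is an illustrative name; the real command runs under sudo on the node):

package main

import (
	"os"
	"strings"
)

// upsertHost drops any /etc/hosts line ending in "\t<name>" and appends
// "ip\tname", matching the grep -v / echo / cp one-liner in the log.
func upsertHost(path, ip, name string) error {
	b, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(b), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Needs write access to the target file (the node runs this as root).
	_ = upsertHost("/etc/hosts", "192.168.65.254", "host.minikube.internal")
}
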
	I1213 10:27:39.700046    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:27:39.887167    8476 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:27:39.917364    8476 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400 for IP: 192.168.85.2
	I1213 10:27:39.917364    8476 certs.go:195] generating shared ca certs ...
	I1213 10:27:39.917364    8476 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:39.918062    8476 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1213 10:27:39.918062    8476 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1213 10:27:39.918062    8476 certs.go:257] generating profile certs ...
	I1213 10:27:39.918912    8476 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\client.key
	I1213 10:27:39.918966    8476 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\client.crt with IP's: []
	I1213 10:27:39.969525    8476 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\client.crt ...
	I1213 10:27:39.969525    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\client.crt: {Name:mkded0c3a33573ddb9efde80db53622d23beebc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:39.970523    8476 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\client.key ...
	I1213 10:27:39.970523    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\client.key: {Name:mkddb0c680c1cfbc7fb76412dc59f990aa3351fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:39.970523    8476 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.key.da8001c6
	I1213 10:27:39.970523    8476 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.crt.da8001c6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1213 10:27:40.148355    8476 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.crt.da8001c6 ...
	I1213 10:27:40.148355    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.crt.da8001c6: {Name:mkb638048bd89c15c2729273b91ace1d4490353e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:40.148703    8476 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.key.da8001c6 ...
	I1213 10:27:40.148703    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.key.da8001c6: {Name:mk4e2e28e87911a65a5741680815685d917d2bc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:40.149871    8476 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.crt.da8001c6 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.crt
	I1213 10:27:40.164141    8476 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.key.da8001c6 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.key
	I1213 10:27:40.165495    8476 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.key
	I1213 10:27:40.165495    8476 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.crt with IP's: []
	I1213 10:27:40.389110    8476 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.crt ...
	I1213 10:27:40.389110    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.crt: {Name:mk9ea56953d9936fd5e08b8dc707cf8c179327b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:40.390173    8476 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.key ...
	I1213 10:27:40.390173    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.key: {Name:mk1d05f99191685ca712d4d7978411bd7096c85b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
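
Each "generating signed profile cert" step above reduces to standard crypto/x509 work: build a certificate template carrying the required IP SANs (for apiserver.crt: 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2, per the log) and sign it. A self-signed stand-in sketch; minikube actually signs these with its minikubeCA key via its own cert helpers:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
		IPAddresses: []net.IP{ // the IP SANs the log records for apiserver.crt
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
		BasicConstraintsValid: true,
		IsCA:                  true, // self-signed stand-in only
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
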
	I1213 10:27:40.404560    8476 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem (1338 bytes)
	W1213 10:27:40.404560    8476 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968_empty.pem, impossibly tiny 0 bytes
	I1213 10:27:40.404560    8476 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1213 10:27:40.404560    8476 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1213 10:27:40.404560    8476 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1213 10:27:40.405551    8476 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1213 10:27:40.405551    8476 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem (1708 bytes)
	I1213 10:27:40.406555    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:27:40.441360    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:27:40.476758    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:27:40.508936    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 10:27:40.539795    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 10:27:40.569170    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 10:27:40.700611    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:27:40.735214    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:27:40.767361    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /usr/share/ca-certificates/29682.pem (1708 bytes)
	I1213 10:27:40.807746    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:27:40.841101    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem --> /usr/share/ca-certificates/2968.pem (1338 bytes)
	I1213 10:27:40.876541    8476 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:27:40.905929    8476 ssh_runner.go:195] Run: openssl version
	I1213 10:27:40.919422    8476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29682.pem
	I1213 10:27:40.935412    8476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29682.pem /etc/ssl/certs/29682.pem
	I1213 10:27:40.958800    8476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29682.pem
	I1213 10:27:40.966774    8476 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:48 /usr/share/ca-certificates/29682.pem
	I1213 10:27:40.970772    8476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29682.pem
	I1213 10:27:41.020692    8476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:27:41.042422    8476 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/29682.pem /etc/ssl/certs/3ec20f2e.0
	I1213 10:27:41.062440    8476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:27:41.083044    8476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:27:41.101089    8476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:27:41.109913    8476 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:27:41.115807    8476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:27:41.166390    8476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:27:41.184269    8476 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 10:27:41.205563    8476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2968.pem
	I1213 10:27:41.225153    8476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2968.pem /etc/ssl/certs/2968.pem
	I1213 10:27:41.244522    8476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2968.pem
	I1213 10:27:41.255274    8476 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:48 /usr/share/ca-certificates/2968.pem
	I1213 10:27:41.258261    8476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2968.pem
	I1213 10:27:41.337148    8476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:27:41.361850    8476 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2968.pem /etc/ssl/certs/51391683.0
	I1213 10:27:41.386416    8476 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:27:41.397702    8476 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
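
A failed stat on apiserver-kubelet-client.crt is how minikube decides this is likely a first start (full kubeadm init) rather than a restart. Sketch of the probe (likelyFirstStart is an illustrative name, not minikube's API):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// likelyFirstStart mirrors the stat probe above: the absence of the
// apiserver-kubelet-client cert under the certs dir marks a first start.
func likelyFirstStart(certDir string) bool {
	_, err := os.Stat(filepath.Join(certDir, "apiserver-kubelet-client.crt"))
	return os.IsNotExist(err)
}

func main() {
	fmt.Println(likelyFirstStart("/var/lib/minikube/certs"))
}
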
	I1213 10:27:41.398038    8476 kubeadm.go:401] StartCluster: {Name:kubenet-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:27:41.402376    8476 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 10:27:41.436826    8476 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:27:41.456770    8476 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:27:41.472386    8476 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:27:41.476747    8476 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:27:41.495422    8476 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:27:41.495422    8476 kubeadm.go:158] found existing configuration files:
	
	I1213 10:27:41.499410    8476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 10:27:41.516241    8476 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:27:41.521896    8476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:27:41.541264    8476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 10:27:41.558570    8476 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:27:41.564101    8476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:27:41.584137    8476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 10:27:41.604304    8476 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:27:41.610955    8476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:27:41.630902    8476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 10:27:41.645473    8476 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:27:41.649275    8476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
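
The four grep-then-`rm -f` pairs above implement stale-config cleanup: a kubeconfig is kept only if it already points at https://control-plane.minikube.internal:8443; anything else is removed so kubeadm can regenerate it. An equivalent Go sketch (removeIfStale is an illustrative name, not minikube's API):

package main

import (
	"os"
	"strings"
)

// removeIfStale keeps a kubeconfig only when it already references the
// expected control-plane endpoint; otherwise it deletes the file, with
// rm -f semantics (a missing file is not an error).
func removeIfStale(path, endpoint string) error {
	b, err := os.ReadFile(path)
	if err == nil && strings.Contains(string(b), endpoint) {
		return nil // up to date, keep it
	}
	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
		return err
	}
	return nil
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		_ = removeIfStale(f, "https://control-plane.minikube.internal:8443")
	}
}
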
	I1213 10:27:41.666272    8476 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:27:41.782563    8476 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1213 10:27:41.788925    8476 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1213 10:27:41.907030    8476 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:27:41.206851    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:41.233354    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:41.265257    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.265257    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:41.269906    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:41.306686    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.306741    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:41.310710    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:41.357371    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.357427    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:41.361994    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:41.408206    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.408206    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:41.412215    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:41.440724    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.440761    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:41.444506    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:41.485572    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.485572    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:41.489246    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:41.524191    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.524191    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:41.528287    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:41.561636    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.561708    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:41.561708    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:41.561743    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:41.640633    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:41.640633    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:41.679302    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:41.680274    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:41.769509    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:41.756355   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.757496   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.758621   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.762100   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.763629   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:41.756355   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.757496   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.758621   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.762100   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.763629   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:41.769509    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:41.769509    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:41.799016    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:41.799067    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:44.369546    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:44.392404    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:44.422173    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.422173    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:44.426709    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:44.462171    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.462253    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:44.466284    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:44.494675    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.494675    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:44.499090    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:44.525551    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.525576    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:44.529460    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:44.557893    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.557944    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:44.561644    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:44.592507    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.592507    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:44.598127    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:44.628090    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.628112    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:44.632134    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:44.680973    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.681027    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:44.681074    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:44.681074    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:44.750683    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:44.750683    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:44.791179    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:44.791179    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:44.880384    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:44.868761   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.869600   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.870808   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.872391   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.873598   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:44.868761   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.869600   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.870808   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.872391   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.873598   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:44.880415    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:44.880415    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:44.912168    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:44.912168    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:47.473178    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:47.501052    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:47.534467    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.534540    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:47.538128    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:47.568455    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.568455    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:47.575037    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:47.610628    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.610628    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:47.614588    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:47.650306    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.650306    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:47.655401    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:47.688313    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.688313    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:47.691318    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:47.722314    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.722859    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:47.727885    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:47.758032    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.758032    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:47.761680    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:47.793670    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.793670    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:47.793670    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:47.793670    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:47.882682    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:47.871699   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.872599   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.874519   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.875664   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.876452   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:47.871699   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.872599   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.874519   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.875664   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.876452   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:47.882682    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:47.882682    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:47.916355    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:47.916355    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:47.969201    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:47.969201    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:48.035144    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:48.036141    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:50.578488    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:50.600943    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:50.631833    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.631833    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:50.635998    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:50.674649    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.674649    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:50.677731    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:50.712195    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.712322    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:50.716398    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:50.750764    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.750764    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:50.754125    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:50.786595    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.786595    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:50.790175    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:50.818734    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.818734    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:50.821737    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:50.854679    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.854679    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:50.859104    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:50.889584    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.889584    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:50.889584    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:50.889584    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:50.947004    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:50.947004    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:50.984338    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:50.984338    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:51.071556    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:51.060341   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.061513   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.063176   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.064640   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.065750   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:51.060341   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.061513   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.063176   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.064640   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.065750   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:51.071556    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:51.071556    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:51.102630    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:51.102630    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:53.655677    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:53.682918    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:53.715653    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.715653    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:53.718956    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:53.747498    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.747498    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:53.751451    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:53.781030    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.781060    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:53.785519    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:53.815077    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.815077    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:53.818373    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:53.851406    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.851432    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:53.855158    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:53.886371    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.886426    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:53.890230    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:53.921595    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.921595    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:53.925821    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:53.958793    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.958867    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:53.958867    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:53.958867    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:54.023643    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:54.023643    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:54.069221    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:54.069221    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:54.158534    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:54.148053   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:54.149254   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:54.150659   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:54.151827   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:54.152932   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:54.158534    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:54.158534    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:54.187711    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:54.187711    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
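The block above is one pass of minikube's log-gathering loop (logs.go): the process probes for each expected control-plane container by name (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), finds none, and then dumps kubelet, dmesg, describe-nodes, Docker, and container-status output. The "connection refused" on localhost:8443 in the describe-nodes step is consistent with the apiserver container never having started. The same probes can be run by hand from the host; a minimal sketch, assuming a profile name (the <profile> placeholder is not in the log):

    # Probe for the apiserver container inside the minikube node, as logs.go does:
    minikube -p <profile> ssh -- "docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}}'"
    # Inspect recent kubelet output, the same journal unit the loop collects:
    minikube -p <profile> ssh -- "sudo journalctl -u kubelet -n 400"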
	I1213 10:27:57.321321    8476 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 10:27:57.321858    8476 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:27:57.322090    8476 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:27:57.322290    8476 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:27:57.322547    8476 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:27:57.322713    8476 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:27:57.327382    8476 out.go:252]   - Generating certificates and keys ...
	I1213 10:27:57.327382    8476 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:27:57.327991    8476 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:27:57.328219    8476 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 10:27:57.328219    8476 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 10:27:57.328219    8476 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 10:27:57.328219    8476 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 10:27:57.328219    8476 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 10:27:57.328961    8476 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kubenet-416400 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 10:27:57.328961    8476 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 10:27:57.328961    8476 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kubenet-416400 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 10:27:57.328961    8476 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 10:27:57.328961    8476 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 10:27:57.328961    8476 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 10:27:57.328961    8476 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:27:57.328961    8476 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:27:57.329956    8476 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:27:57.329956    8476 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:27:57.329956    8476 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:27:57.329956    8476 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:27:57.329956    8476 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:27:57.329956    8476 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:27:57.333993    8476 out.go:252]   - Booting up control plane ...
	I1213 10:27:57.333993    8476 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:27:57.333993    8476 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:27:57.333993    8476 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:27:57.333993    8476 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:27:57.333993    8476 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:27:57.334957    8476 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:27:57.334957    8476 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:27:57.334957    8476 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:27:57.334957    8476 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:27:57.334957    8476 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:27:57.334957    8476 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.499474ms
	I1213 10:27:57.334957    8476 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 10:27:57.335962    8476 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1213 10:27:57.335962    8476 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 10:27:57.335962    8476 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 10:27:57.335962    8476 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.506067897s
	I1213 10:27:57.335962    8476 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.281282907s
	I1213 10:27:57.335962    8476 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 9.504426001s
	I1213 10:27:57.335962    8476 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 10:27:57.336957    8476 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 10:27:57.336957    8476 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 10:27:57.336957    8476 kubeadm.go:319] [mark-control-plane] Marking the node kubenet-416400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 10:27:57.336957    8476 kubeadm.go:319] [bootstrap-token] Using token: fr9253.a366cb10hxgbs57g
	I1213 10:27:57.338959    8476 out.go:252]   - Configuring RBAC rules ...
	I1213 10:27:57.338959    8476 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 10:27:57.339952    8476 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 10:27:57.339952    8476 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 10:27:57.339952    8476 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 10:27:57.339952    8476 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 10:27:57.339952    8476 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 10:27:57.340953    8476 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 10:27:57.340953    8476 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 10:27:57.340953    8476 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 10:27:57.340953    8476 kubeadm.go:319] 
	I1213 10:27:57.340953    8476 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 10:27:57.340953    8476 kubeadm.go:319] 
	I1213 10:27:57.340953    8476 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 10:27:57.340953    8476 kubeadm.go:319] 
	I1213 10:27:57.340953    8476 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 10:27:57.340953    8476 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 10:27:57.340953    8476 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 10:27:57.341967    8476 kubeadm.go:319] 
	I1213 10:27:57.341967    8476 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 10:27:57.341967    8476 kubeadm.go:319] 
	I1213 10:27:57.341967    8476 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 10:27:57.341967    8476 kubeadm.go:319] 
	I1213 10:27:57.341967    8476 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 10:27:57.341967    8476 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 10:27:57.341967    8476 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 10:27:57.341967    8476 kubeadm.go:319] 
	I1213 10:27:57.341967    8476 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 10:27:57.341967    8476 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 10:27:57.341967    8476 kubeadm.go:319] 
	I1213 10:27:57.342958    8476 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token fr9253.a366cb10hxgbs57g \
	I1213 10:27:57.342958    8476 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4e186cc62273bb1ac6e3884beccb3b1254d51eaaca530d60f3ff3ceb07e5bb75 \
	I1213 10:27:57.342958    8476 kubeadm.go:319] 	--control-plane 
	I1213 10:27:57.342958    8476 kubeadm.go:319] 
	I1213 10:27:57.342958    8476 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 10:27:57.342958    8476 kubeadm.go:319] 
	I1213 10:27:57.342958    8476 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token fr9253.a366cb10hxgbs57g \
	I1213 10:27:57.342958    8476 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4e186cc62273bb1ac6e3884beccb3b1254d51eaaca530d60f3ff3ceb07e5bb75 
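The kubeadm init output above ends with the standard join instructions. If the printed token or CA-cert hash needs to be verified later, both can be recovered on the control-plane node with stock kubeadm and openssl; this is the documented kubeadm recipe, not something specific to this run:

    # List bootstrap tokens (should include fr9253.a366cb10hxgbs57g while it is valid):
    kubeadm token list
    # Recompute the --discovery-token-ca-cert-hash value from the cluster CA:
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'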
	I1213 10:27:57.342958    8476 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1213 10:27:57.342958    8476 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 10:27:57.348959    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:27:57.348959    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kubenet-416400 minikube.k8s.io/updated_at=2025_12_13T10_27_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453 minikube.k8s.io/name=kubenet-416400 minikube.k8s.io/primary=true
	I1213 10:27:57.359965    8476 ops.go:34] apiserver oom_adj: -16
	I1213 10:27:57.481312    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:27:57.982343    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:27:58.481678    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:27:58.981222    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:27:59.482569    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:27:59.981670    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:28:00.482737    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:28:00.667261    8476 kubeadm.go:1114] duration metric: took 3.3242542s to wait for elevateKubeSystemPrivileges
	I1213 10:28:00.667261    8476 kubeadm.go:403] duration metric: took 19.2689858s to StartCluster
	I1213 10:28:00.667261    8476 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:28:00.667261    8476 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:28:00.668362    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:28:00.670249    8476 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 10:28:00.670405    8476 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:28:00.670495    8476 addons.go:70] Setting storage-provisioner=true in profile "kubenet-416400"
	I1213 10:28:00.670495    8476 addons.go:239] Setting addon storage-provisioner=true in "kubenet-416400"
	I1213 10:28:00.670495    8476 addons.go:70] Setting default-storageclass=true in profile "kubenet-416400"
	I1213 10:28:00.670495    8476 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubenet-416400"
	I1213 10:28:00.670495    8476 host.go:66] Checking if "kubenet-416400" exists ...
	I1213 10:28:00.670495    8476 config.go:182] Loaded profile config "kubenet-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 10:28:00.670296    8476 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 10:28:00.672621    8476 out.go:179] * Verifying Kubernetes components...
	I1213 10:28:00.680707    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Status}}
	I1213 10:28:00.681870    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:28:00.683512    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Status}}
	I1213 10:28:00.745823    8476 addons.go:239] Setting addon default-storageclass=true in "kubenet-416400"
	I1213 10:28:00.745823    8476 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
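At this point process 8476 has queued both addons for the kubenet-416400 profile: storage-provisioner (via the gcr.io/k8s-minikube/storage-provisioner:v5 image) and default-storageclass. Their state can be confirmed from the host once the cluster is up; a sketch, assuming the profile name taken from this log:

    minikube -p kubenet-416400 addons list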
	I1213 10:27:56.751844    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:56.777473    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:56.819791    5404 logs.go:282] 0 containers: []
	W1213 10:27:56.819791    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:56.823836    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:56.851634    5404 logs.go:282] 0 containers: []
	W1213 10:27:56.851634    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:56.856515    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:56.890733    5404 logs.go:282] 0 containers: []
	W1213 10:27:56.890733    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:56.896015    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:56.929283    5404 logs.go:282] 0 containers: []
	W1213 10:27:56.929283    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:56.933600    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:56.965281    5404 logs.go:282] 0 containers: []
	W1213 10:27:56.965380    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:56.971621    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:57.007594    5404 logs.go:282] 0 containers: []
	W1213 10:27:57.007594    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:57.011652    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:57.041984    5404 logs.go:282] 0 containers: []
	W1213 10:27:57.041984    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:57.047208    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:57.080712    5404 logs.go:282] 0 containers: []
	W1213 10:27:57.080712    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:57.080712    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:57.080712    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:57.149704    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:57.149704    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:57.193071    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:57.193071    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:57.285994    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:57.274215   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:57.274873   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:57.277962   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:57.279748   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:57.281147   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:57.285994    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:57.285994    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:57.321321    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:57.321321    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:59.885480    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:59.908525    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:59.938475    5404 logs.go:282] 0 containers: []
	W1213 10:27:59.938475    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:59.942628    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:59.971795    5404 logs.go:282] 0 containers: []
	W1213 10:27:59.971795    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:59.980520    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:00.013354    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.013413    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:00.017504    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:00.052020    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.052020    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:00.055918    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:00.092456    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.092456    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:00.099457    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:00.132599    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.132599    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:00.136451    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:00.166632    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.166765    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:00.170268    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:00.200588    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.200588    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:00.200588    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:00.200588    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:00.270835    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:00.270835    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:00.309448    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:00.310446    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:00.403831    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:00.393165   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:00.394233   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:00.395506   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:00.396522   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:00.397851   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:28:00.403831    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:00.403831    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:00.431826    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:00.431826    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
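The container-status command repeated throughout uses a shell fallback: the backticks resolve to crictl's path when crictl is installed, and if the whole crictl invocation fails, sudo docker ps -a runs instead. A more explicit equivalent of the same fallback, as a sketch:

    # Roughly equivalent to: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    if command -v crictl >/dev/null 2>&1; then
        sudo crictl ps -a || sudo docker ps -a
    else
        sudo docker ps -a
    fi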
	I1213 10:28:00.745823    8476 host.go:66] Checking if "kubenet-416400" exists ...
	I1213 10:28:00.747823    8476 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:28:00.747823    8476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:28:00.751823    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:28:00.752838    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Status}}
	I1213 10:28:00.805827    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	I1213 10:28:00.806835    8476 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:28:00.806835    8476 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:28:00.809826    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:28:00.859695    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	I1213 10:28:00.877310    8476 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 10:28:01.093206    8476 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:28:01.096660    8476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:28:01.289059    8476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:28:01.688169    8476 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I1213 10:28:01.693138    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:28:01.748392    8476 node_ready.go:35] waiting up to 15m0s for node "kubenet-416400" to be "Ready" ...
	I1213 10:28:01.777235    8476 node_ready.go:49] node "kubenet-416400" is "Ready"
	I1213 10:28:01.777235    8476 node_ready.go:38] duration metric: took 28.7755ms for node "kubenet-416400" to be "Ready" ...
	I1213 10:28:01.778242    8476 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:28:01.782492    8476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:02.197568    8476 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kubenet-416400" context rescaled to 1 replicas
	I1213 10:28:02.343589    8476 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.053978s)
	I1213 10:28:02.343589    8476 api_server.go:72] duration metric: took 1.673269s to wait for apiserver process to appear ...
	I1213 10:28:02.343589    8476 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.246374s)
	I1213 10:28:02.343677    8476 api_server.go:88] waiting for apiserver healthz status ...
	I1213 10:28:02.343720    8476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55078/healthz ...
	I1213 10:28:02.352594    8476 api_server.go:279] https://127.0.0.1:55078/healthz returned 200:
	ok
	I1213 10:28:02.355060    8476 api_server.go:141] control plane version: v1.34.2
	I1213 10:28:02.355060    8476 api_server.go:131] duration metric: took 11.3397ms to wait for apiserver health ...
	I1213 10:28:02.355060    8476 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 10:28:02.363052    8476 system_pods.go:59] 8 kube-system pods found
	I1213 10:28:02.363052    8476 system_pods.go:61] "coredns-66bc5c9577-pzlst" [c0710760-8702-46dd-82cf-57f9c82bfa9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.363052    8476 system_pods.go:61] "coredns-66bc5c9577-qsf76" [941a59a1-7977-4e35-90e1-5e787611afef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.363052    8476 system_pods.go:61] "etcd-kubenet-416400" [a3364f15-398a-4392-8334-ba7bb2989af2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 10:28:02.363052    8476 system_pods.go:61] "kube-apiserver-kubenet-416400" [64e3f0cf-d02f-4c81-854c-634c396b4c0a] Running
	I1213 10:28:02.363052    8476 system_pods.go:61] "kube-controller-manager-kubenet-416400" [7af07742-4273-4dcd-8776-4db035d3905b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:28:02.363052    8476 system_pods.go:61] "kube-proxy-7bdqb" [a4863c98-3861-4d39-b4e6-81ebe763237d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 10:28:02.363052    8476 system_pods.go:61] "kube-scheduler-kubenet-416400" [df666217-8c7b-4d6d-b9ce-9740af155c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:28:02.363052    8476 system_pods.go:61] "storage-provisioner" [1cae84ec-bfa6-4586-83a9-dfdac48e2707] Pending
	I1213 10:28:02.363052    8476 system_pods.go:74] duration metric: took 7.9926ms to wait for pod list to return data ...
	I1213 10:28:02.363052    8476 default_sa.go:34] waiting for default service account to be created ...
	I1213 10:28:02.363944    8476 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1213 10:28:02.368689    8476 default_sa.go:45] found service account: "default"
	I1213 10:28:02.368689    8476 default_sa.go:55] duration metric: took 5.6365ms for default service account to be created ...
	I1213 10:28:02.368689    8476 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 10:28:02.368892    8476 addons.go:530] duration metric: took 1.6984619s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1213 10:28:02.374322    8476 system_pods.go:86] 8 kube-system pods found
	I1213 10:28:02.374322    8476 system_pods.go:89] "coredns-66bc5c9577-pzlst" [c0710760-8702-46dd-82cf-57f9c82bfa9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.374322    8476 system_pods.go:89] "coredns-66bc5c9577-qsf76" [941a59a1-7977-4e35-90e1-5e787611afef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.374322    8476 system_pods.go:89] "etcd-kubenet-416400" [a3364f15-398a-4392-8334-ba7bb2989af2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 10:28:02.374322    8476 system_pods.go:89] "kube-apiserver-kubenet-416400" [64e3f0cf-d02f-4c81-854c-634c396b4c0a] Running
	I1213 10:28:02.374322    8476 system_pods.go:89] "kube-controller-manager-kubenet-416400" [7af07742-4273-4dcd-8776-4db035d3905b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:28:02.374322    8476 system_pods.go:89] "kube-proxy-7bdqb" [a4863c98-3861-4d39-b4e6-81ebe763237d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 10:28:02.374322    8476 system_pods.go:89] "kube-scheduler-kubenet-416400" [df666217-8c7b-4d6d-b9ce-9740af155c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:28:02.374322    8476 system_pods.go:89] "storage-provisioner" [1cae84ec-bfa6-4586-83a9-dfdac48e2707] Pending
	I1213 10:28:02.374322    8476 retry.go:31] will retry after 257.90094ms: missing components: kube-dns, kube-proxy
	I1213 10:28:02.647317    8476 system_pods.go:86] 8 kube-system pods found
	I1213 10:28:02.647382    8476 system_pods.go:89] "coredns-66bc5c9577-pzlst" [c0710760-8702-46dd-82cf-57f9c82bfa9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.647382    8476 system_pods.go:89] "coredns-66bc5c9577-qsf76" [941a59a1-7977-4e35-90e1-5e787611afef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.647382    8476 system_pods.go:89] "etcd-kubenet-416400" [a3364f15-398a-4392-8334-ba7bb2989af2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 10:28:02.647382    8476 system_pods.go:89] "kube-apiserver-kubenet-416400" [64e3f0cf-d02f-4c81-854c-634c396b4c0a] Running
	I1213 10:28:02.647448    8476 system_pods.go:89] "kube-controller-manager-kubenet-416400" [7af07742-4273-4dcd-8776-4db035d3905b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:28:02.647448    8476 system_pods.go:89] "kube-proxy-7bdqb" [a4863c98-3861-4d39-b4e6-81ebe763237d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 10:28:02.647448    8476 system_pods.go:89] "kube-scheduler-kubenet-416400" [df666217-8c7b-4d6d-b9ce-9740af155c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:28:02.647496    8476 system_pods.go:89] "storage-provisioner" [1cae84ec-bfa6-4586-83a9-dfdac48e2707] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:28:02.647496    8476 retry.go:31] will retry after 305.033982ms: missing components: kube-dns, kube-proxy
	I1213 10:28:02.960601    8476 system_pods.go:86] 8 kube-system pods found
	I1213 10:28:02.960642    8476 system_pods.go:89] "coredns-66bc5c9577-pzlst" [c0710760-8702-46dd-82cf-57f9c82bfa9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.960678    8476 system_pods.go:89] "coredns-66bc5c9577-qsf76" [941a59a1-7977-4e35-90e1-5e787611afef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.960678    8476 system_pods.go:89] "etcd-kubenet-416400" [a3364f15-398a-4392-8334-ba7bb2989af2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 10:28:02.960728    8476 system_pods.go:89] "kube-apiserver-kubenet-416400" [64e3f0cf-d02f-4c81-854c-634c396b4c0a] Running
	I1213 10:28:02.960728    8476 system_pods.go:89] "kube-controller-manager-kubenet-416400" [7af07742-4273-4dcd-8776-4db035d3905b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:28:02.960728    8476 system_pods.go:89] "kube-proxy-7bdqb" [a4863c98-3861-4d39-b4e6-81ebe763237d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 10:28:02.960728    8476 system_pods.go:89] "kube-scheduler-kubenet-416400" [df666217-8c7b-4d6d-b9ce-9740af155c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:28:02.960780    8476 system_pods.go:89] "storage-provisioner" [1cae84ec-bfa6-4586-83a9-dfdac48e2707] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:28:02.960803    8476 retry.go:31] will retry after 352.340429ms: missing components: kube-dns, kube-proxy
	I1213 10:28:03.376766    8476 system_pods.go:86] 8 kube-system pods found
	I1213 10:28:03.376766    8476 system_pods.go:89] "coredns-66bc5c9577-pzlst" [c0710760-8702-46dd-82cf-57f9c82bfa9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:03.376766    8476 system_pods.go:89] "coredns-66bc5c9577-qsf76" [941a59a1-7977-4e35-90e1-5e787611afef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:03.376766    8476 system_pods.go:89] "etcd-kubenet-416400" [a3364f15-398a-4392-8334-ba7bb2989af2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 10:28:03.376766    8476 system_pods.go:89] "kube-apiserver-kubenet-416400" [64e3f0cf-d02f-4c81-854c-634c396b4c0a] Running
	I1213 10:28:03.376766    8476 system_pods.go:89] "kube-controller-manager-kubenet-416400" [7af07742-4273-4dcd-8776-4db035d3905b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:28:03.376766    8476 system_pods.go:89] "kube-proxy-7bdqb" [a4863c98-3861-4d39-b4e6-81ebe763237d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 10:28:03.376766    8476 system_pods.go:89] "kube-scheduler-kubenet-416400" [df666217-8c7b-4d6d-b9ce-9740af155c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:28:03.376766    8476 system_pods.go:89] "storage-provisioner" [1cae84ec-bfa6-4586-83a9-dfdac48e2707] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:28:03.377765    8476 retry.go:31] will retry after 379.080105ms: missing components: kube-dns, kube-proxy
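The retry.go lines above show the readiness poll: system_pods lists the kube-system pods, retry.go backs off a few hundred milliseconds while kube-dns and kube-proxy are still Pending, and the loop repeats. An equivalent manual check with kubectl, assuming the kubenet-416400 context that minikube writes into the kubeconfig:

    kubectl --context kubenet-416400 -n kube-system get pods -l k8s-app=kube-dns
    kubectl --context kubenet-416400 -n kube-system get pods -l k8s-app=kube-proxy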
	I1213 10:28:02.990203    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:03.012584    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:03.048099    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.049085    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:03.054131    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:03.090044    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.090114    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:03.094206    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:03.124610    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.124610    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:03.128713    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:03.158624    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.158624    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:03.162039    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:03.197023    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.197023    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:03.201011    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:03.231523    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.231523    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:03.238992    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:03.270780    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.270780    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:03.273777    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:03.307802    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.307802    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:03.307802    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:03.307802    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:28:03.365023    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:03.365023    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:03.434753    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:03.434753    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:03.474998    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:03.474998    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:03.558479    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:03.548624   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.550169   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.550790   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.552338   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.553567   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:28:03.558479    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:03.558479    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:06.093878    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:06.119160    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:06.151920    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.151956    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:06.155686    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:03.767616    8476 system_pods.go:86] 7 kube-system pods found
	I1213 10:28:03.767736    8476 system_pods.go:89] "coredns-66bc5c9577-pzlst" [c0710760-8702-46dd-82cf-57f9c82bfa9a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:03.767736    8476 system_pods.go:89] "etcd-kubenet-416400" [a3364f15-398a-4392-8334-ba7bb2989af2] Running
	I1213 10:28:03.767836    8476 system_pods.go:89] "kube-apiserver-kubenet-416400" [64e3f0cf-d02f-4c81-854c-634c396b4c0a] Running
	I1213 10:28:03.767860    8476 system_pods.go:89] "kube-controller-manager-kubenet-416400" [7af07742-4273-4dcd-8776-4db035d3905b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:28:03.767860    8476 system_pods.go:89] "kube-proxy-7bdqb" [a4863c98-3861-4d39-b4e6-81ebe763237d] Running
	I1213 10:28:03.767860    8476 system_pods.go:89] "kube-scheduler-kubenet-416400" [df666217-8c7b-4d6d-b9ce-9740af155c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:28:03.767860    8476 system_pods.go:89] "storage-provisioner" [1cae84ec-bfa6-4586-83a9-dfdac48e2707] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:28:03.767920    8476 system_pods.go:126] duration metric: took 1.399211s to wait for k8s-apps to be running ...
	I1213 10:28:03.767952    8476 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 10:28:03.772800    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:28:03.793452    8476 system_svc.go:56] duration metric: took 25.5002ms WaitForService to wait for kubelet
	I1213 10:28:03.793452    8476 kubeadm.go:587] duration metric: took 3.1231108s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:28:03.793452    8476 node_conditions.go:102] verifying NodePressure condition ...
	I1213 10:28:03.799850    8476 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1213 10:28:03.799942    8476 node_conditions.go:123] node cpu capacity is 16
	I1213 10:28:03.799942    8476 node_conditions.go:105] duration metric: took 6.4898ms to run NodePressure ...
	I1213 10:28:03.800002    8476 start.go:242] waiting for startup goroutines ...
	I1213 10:28:03.800002    8476 start.go:247] waiting for cluster config update ...
	I1213 10:28:03.800034    8476 start.go:256] writing updated cluster config ...
	I1213 10:28:03.805062    8476 ssh_runner.go:195] Run: rm -f paused
	I1213 10:28:03.812457    8476 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 10:28:03.818438    8476 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pzlst" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 10:28:05.831273    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:08.330368    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
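pod_ready.go is now in the final wait: up to 4m0s for every labelled kube-system pod to report Ready, and the two W-level lines show coredns-66bc5c9577-pzlst still unready at roughly 2.5s polling intervals. The same condition can be expressed with kubectl wait; a sketch under the same context-name assumption as above:

    kubectl --context kubenet-416400 -n kube-system wait \
        --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m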
	I1213 10:28:06.185340    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.185340    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:06.189047    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:06.218663    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.218713    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:06.223022    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:06.251817    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.251817    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:06.256048    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:06.288967    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.289042    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:06.293045    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:06.324404    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.324404    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:06.328470    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:06.359488    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.359488    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:06.363305    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:06.395085    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.395085    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:06.395085    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:06.395085    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:06.460705    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:06.460705    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:06.500531    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:06.500531    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:06.584202    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:06.573119   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.576304   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.577709   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.579122   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.580090   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:28:06.573119   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.576304   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.577709   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.579122   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.580090   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:28:06.584202    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:06.584202    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:06.612936    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:06.612936    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:28:09.171143    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:09.196436    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:09.230003    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.230072    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:09.234113    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:09.263594    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.263629    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:09.267574    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:09.295583    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.295671    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:09.300744    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:09.330627    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.330627    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:09.334426    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:09.370279    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.370279    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:09.374820    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:09.404955    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.405033    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:09.410253    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:09.441568    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.441568    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:09.445297    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:09.485821    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.485874    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:09.485874    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:09.485936    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:09.548603    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:09.548603    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:09.588521    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:09.588521    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:09.678327    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:09.666892   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.667836   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.670310   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.671394   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.672438   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:28:09.666892   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.667836   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.670310   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.671394   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.672438   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:28:09.678369    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:09.678369    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:09.705500    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:09.705500    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:28:10.333290    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:12.830400    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	I1213 10:28:12.262086    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:12.290635    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:12.327110    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.327110    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:12.331105    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:12.360305    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.360305    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:12.367813    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:12.398968    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.399045    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:12.403042    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:12.436089    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.436089    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:12.439942    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:12.471734    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.471734    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:12.475722    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:12.505991    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.506024    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:12.509742    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:12.539425    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.539425    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:12.543823    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:12.573279    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.573344    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:12.573344    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:12.573344    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:12.636807    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:12.636807    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:12.677094    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:12.677094    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:12.762424    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:12.751891   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.752690   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.755186   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.756173   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.756852   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:28:12.751891   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.752690   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.755186   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.756173   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.756852   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:28:12.762424    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:12.762424    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:12.790164    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:12.790164    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:28:15.344891    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:15.368646    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:15.404255    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.404255    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:15.409408    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:15.441938    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.441938    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:15.445068    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:15.475697    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.475697    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:15.479253    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:15.511327    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.511327    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:15.515265    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:15.545395    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.545395    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:15.548941    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:15.579842    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.579918    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:15.584969    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:15.614571    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.614571    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:15.618436    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:15.650365    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.650427    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:15.650427    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:15.650427    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:15.714351    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:15.714351    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:15.752018    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:15.752018    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:15.834772    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:15.824883   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.826055   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.826571   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.829124   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.829823   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:28:15.824883   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.826055   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.826571   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.829124   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.829823   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:28:15.834772    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:15.834772    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:15.866850    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:15.866850    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:28:14.830848    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:17.329771    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	I1213 10:28:18.423576    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:18.449885    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:18.482529    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.482601    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:18.485766    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:18.514138    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.514797    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:18.518214    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:18.550542    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.550542    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:18.553540    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:18.584106    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.584106    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:18.588197    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:18.619945    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.619977    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:18.623644    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:18.654453    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.654453    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:18.657446    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:18.687250    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.687250    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:18.690703    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:18.717150    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.717150    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:18.717150    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:18.717150    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:28:18.770937    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:18.770937    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:18.835919    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:18.835919    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:18.872319    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:18.873326    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:18.962288    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:18.952563   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.953751   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.955148   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.956811   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.959348   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:28:18.952563   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.953751   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.955148   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.956811   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.959348   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:28:18.962288    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:18.963246    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:21.496578    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:21.522995    5404 out.go:203] 
	W1213 10:28:21.525440    5404 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1213 10:28:21.525581    5404 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1213 10:28:21.525667    5404 out.go:285] * Related issues:
	W1213 10:28:21.525667    5404 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1213 10:28:21.525824    5404 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1213 10:28:21.528379    5404 out.go:203] 
	W1213 10:28:19.831718    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:21.833516    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:24.330384    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:26.331207    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:28.332900    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	
	
	==> Docker <==
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.725825301Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.725986416Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.725998417Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.726003718Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.726009218Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.726219138Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.726398555Z" level=info msg="Initializing buildkit"
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.844000659Z" level=info msg="Completed buildkit initialization"
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.850793321Z" level=info msg="Daemon has completed initialization"
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.851043146Z" level=info msg="API listen on /run/docker.sock"
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.851051346Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.851065248Z" level=info msg="API listen on [::]:2376"
	Dec 13 10:22:16 newest-cni-307000 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 13 10:22:17 newest-cni-307000 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 10:22:17 newest-cni-307000 cri-dockerd[1219]: time="2025-12-13T10:22:17Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 13 10:22:17 newest-cni-307000 cri-dockerd[1219]: time="2025-12-13T10:22:17Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 13 10:22:17 newest-cni-307000 cri-dockerd[1219]: time="2025-12-13T10:22:17Z" level=info msg="Start docker client with request timeout 0s"
	Dec 13 10:22:17 newest-cni-307000 cri-dockerd[1219]: time="2025-12-13T10:22:17Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 13 10:22:17 newest-cni-307000 cri-dockerd[1219]: time="2025-12-13T10:22:17Z" level=info msg="Loaded network plugin cni"
	Dec 13 10:22:17 newest-cni-307000 cri-dockerd[1219]: time="2025-12-13T10:22:17Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 13 10:22:17 newest-cni-307000 cri-dockerd[1219]: time="2025-12-13T10:22:17Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 13 10:22:17 newest-cni-307000 cri-dockerd[1219]: time="2025-12-13T10:22:17Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 13 10:22:17 newest-cni-307000 cri-dockerd[1219]: time="2025-12-13T10:22:17Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 13 10:22:17 newest-cni-307000 cri-dockerd[1219]: time="2025-12-13T10:22:17Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 13 10:22:17 newest-cni-307000 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:32.541940   20035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:32.543088   20035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:32.544232   20035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:32.545119   20035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:32.547916   20035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000002] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +7.347224] CPU: 1 PID: 487650 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f03540a7b20
	[  +0.000039] Code: Unable to access opcode bytes at RIP 0x7f03540a7af6.
	[  +0.000001] RSP: 002b:00007fff4615c900 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.848535] CPU: 14 PID: 487834 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f24bdd40b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f24bdd40af6.
	[  +0.000001] RSP: 002b:00007ffcef45f750 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +9.262444] tmpfs: Unknown parameter 'noswap'
	[ +10.454536] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 10:28:32 up  2:04,  0 user,  load average: 2.81, 3.73, 3.62
	Linux newest-cni-307000 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:28:29 newest-cni-307000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:28:29 newest-cni-307000 kubelet[19829]: E1213 10:28:29.664536   19829 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:28:29 newest-cni-307000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:28:29 newest-cni-307000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:28:30 newest-cni-307000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 13 10:28:30 newest-cni-307000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:28:30 newest-cni-307000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:28:30 newest-cni-307000 kubelet[19857]: E1213 10:28:30.411384   19857 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:28:30 newest-cni-307000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:28:30 newest-cni-307000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:28:31 newest-cni-307000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	Dec 13 10:28:31 newest-cni-307000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:28:31 newest-cni-307000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:28:31 newest-cni-307000 kubelet[19885]: E1213 10:28:31.178828   19885 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:28:31 newest-cni-307000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:28:31 newest-cni-307000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:28:31 newest-cni-307000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
	Dec 13 10:28:31 newest-cni-307000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:28:31 newest-cni-307000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:28:31 newest-cni-307000 kubelet[19913]: E1213 10:28:31.933390   19913 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:28:31 newest-cni-307000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:28:31 newest-cni-307000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:28:32 newest-cni-307000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
	Dec 13 10:28:32 newest-cni-307000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:28:32 newest-cni-307000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
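The kubelet section above shows the proximate cause of this failure: on every restart attempt, kubelet exits during config validation because this Kubernetes build (v1.35.0-beta.0) refuses to run on a cgroup v1 host, so no static pods (including kube-apiserver) ever start, which is why the earlier K8S_APISERVER_MISSING exit reports that the apiserver process never appeared. The dockerd warning earlier in the log ("Support for cgroup v1 is deprecated ...") points the same way. A minimal check of the node's cgroup mode, using the profile name from the logs above (a diagnostic sketch, not part of the test suite):

	# cgroup2fs means cgroup v2; tmpfs means the legacy cgroup v1 hierarchy
	minikube ssh -p newest-cni-307000 -- stat -fc %T /sys/fs/cgroup/

On a WSL2 host the node container inherits the host's cgroup layout; cgroup v2 can commonly be forced by adding "kernelCommandLine = cgroup_no_v1=all" under the [wsl2] section of .wslconfig and restarting the VM with "wsl --shutdown".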
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-307000 -n newest-cni-307000
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-307000 -n newest-cni-307000: exit status 2 (593.9565ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-307000" apiserver is not running, skipping kubectl commands (state="Stopped")
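A non-zero exit from "minikube status" here encodes that some component is not running; the harness treats it as non-fatal ("may be ok") and reads the individual template fields instead. The same distinction can be reproduced by hand with the Go-template flag the harness uses above (a sketch; the Host container can be Running while the APIServer is Stopped, as in this post-mortem):

	# Prints e.g. "Running Stopped" for this post-mortem state
	out/minikube-windows-amd64.exe status -p newest-cni-307000 --format "{{.Host}} {{.APIServer}}"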
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-307000
helpers_test.go:244: (dbg) docker inspect newest-cni-307000:

-- stdout --
	[
	    {
	        "Id": "cc243490f4045f6275620fead3cf743bdcc06793f30944d53d3d0e22c416211e",
	        "Created": "2025-12-13T10:11:37.912113644Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 431795,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:22:07.362257704Z",
	            "FinishedAt": "2025-12-13T10:22:04.657974104Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/cc243490f4045f6275620fead3cf743bdcc06793f30944d53d3d0e22c416211e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cc243490f4045f6275620fead3cf743bdcc06793f30944d53d3d0e22c416211e/hostname",
	        "HostsPath": "/var/lib/docker/containers/cc243490f4045f6275620fead3cf743bdcc06793f30944d53d3d0e22c416211e/hosts",
	        "LogPath": "/var/lib/docker/containers/cc243490f4045f6275620fead3cf743bdcc06793f30944d53d3d0e22c416211e/cc243490f4045f6275620fead3cf743bdcc06793f30944d53d3d0e22c416211e-json.log",
	        "Name": "/newest-cni-307000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-307000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-307000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1fd6cedff83bee99df393eab952a55cc2565a988396fbf552640cb0ef5f70bba-init/diff:/var/lib/docker/overlay2/429aa299c6fcdb1695d08ec7c893c57c033afffcd3ec41fc904bf3236db5abde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1fd6cedff83bee99df393eab952a55cc2565a988396fbf552640cb0ef5f70bba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1fd6cedff83bee99df393eab952a55cc2565a988396fbf552640cb0ef5f70bba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1fd6cedff83bee99df393eab952a55cc2565a988396fbf552640cb0ef5f70bba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-307000",
	                "Source": "/var/lib/docker/volumes/newest-cni-307000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-307000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-307000",
	                "name.minikube.sigs.k8s.io": "newest-cni-307000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ac71d39e43dea35bc9d6021f600e0a448ae9dca45dd0a410ca179f856b12121e",
	            "SandboxKey": "/var/run/docker/netns/ac71d39e43de",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53942"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53943"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53944"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53939"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53940"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-307000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "091d798055d24cd11a8819044665f960a2f1124bb052fb661c5793e42aeec481",
	                    "EndpointID": "d344064538b6f36208f8c5d92ef1203acaac8ed63c99703b04ed68908d156813",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-307000",
	                        "cc243490f404"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
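The NetworkSettings block above records how the control plane is reached from the Windows host: each container port is published on 127.0.0.1 with an ephemeral host port, so the apiserver's 8443/tcp is only reachable through its mapped loopback port (53940 here). The mapping can be queried directly (a sketch using the container name from the inspect output above):

	# Prints the loopback address:port that forwards to the apiserver's 8443
	docker port newest-cni-307000 8443/tcp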
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-307000 -n newest-cni-307000
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-307000 -n newest-cni-307000: exit status 2 (594.059ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-307000 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-307000 logs -n 25: (1.1868644s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                      │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-416400 sudo journalctl -xeu kubelet --all --full --no-pager          │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo cat /etc/kubernetes/kubelet.conf                         │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo cat /var/lib/kubelet/config.yaml                         │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo systemctl status docker --all --full --no-pager          │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo systemctl cat docker --no-pager                          │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo cat /etc/docker/daemon.json                              │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo docker system info                                       │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo systemctl status cri-docker --all --full --no-pager      │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo systemctl cat cri-docker --no-pager                      │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo cat /usr/lib/systemd/system/cri-docker.service           │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo cri-dockerd --version                                    │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo systemctl status containerd --all --full --no-pager      │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo systemctl cat containerd --no-pager                      │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo cat /lib/systemd/system/containerd.service               │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo cat /etc/containerd/config.toml                          │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo containerd config dump                                   │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo systemctl status crio --all --full --no-pager            │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │                     │
	│ ssh     │ -p bridge-416400 sudo systemctl cat crio --no-pager                            │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ ssh     │ -p bridge-416400 sudo crio config                                              │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ delete  │ -p bridge-416400                                                               │ bridge-416400     │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ image   │ newest-cni-307000 image list --format=json                                     │ newest-cni-307000 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:28 UTC │ 13 Dec 25 10:28 UTC │
	│ pause   │ -p newest-cni-307000 --alsologtostderr -v=1                                    │ newest-cni-307000 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:28 UTC │ 13 Dec 25 10:28 UTC │
	│ unpause │ -p newest-cni-307000 --alsologtostderr -v=1                                    │ newest-cni-307000 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:28 UTC │ 13 Dec 25 10:28 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:27:08
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
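
The header above describes klog-style lines: a severity letter, an mmdd date, a timestamp, a thread id, a file:line source, then the message. A small sketch of splitting one such line, with a regexp that is my own reading of that format note rather than anything minikube ships:

    // Sketch: split a klog-style line ("[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg").
    package main

    import (
        "fmt"
        "regexp"
    )

    var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+:\d+)\] (.*)$`)

    func main() {
        line := `I1213 10:27:08.467331    8476 out.go:360] Setting OutFile to fd 1212 ...`
        m := klogLine.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("no match")
            return
        }
        fmt.Printf("sev=%s date=%s time=%s tid=%s src=%s msg=%q\n",
            m[1], m[2], m[3], m[4], m[5], m[6])
    }
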
	I1213 10:27:08.467331    8476 out.go:360] Setting OutFile to fd 1212 ...
	I1213 10:27:08.510327    8476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:27:08.510327    8476 out.go:374] Setting ErrFile to fd 1652...
	I1213 10:27:08.510327    8476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:27:08.525338    8476 out.go:368] Setting JSON to false
	I1213 10:27:08.528326    8476 start.go:133] hostinfo: {"hostname":"minikube4","uptime":7435,"bootTime":1765614192,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 10:27:08.529330    8476 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 10:27:08.533334    8476 out.go:179] * [kubenet-416400] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 10:27:08.536332    8476 notify.go:221] Checking for updates...
	I1213 10:27:08.538327    8476 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:27:08.541325    8476 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:27:08.543338    8476 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 10:27:08.545327    8476 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:27:08.547331    8476 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:27:08.550333    8476 config.go:182] Loaded profile config "bridge-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 10:27:08.551337    8476 config.go:182] Loaded profile config "newest-cni-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:27:08.551337    8476 config.go:182] Loaded profile config "no-preload-803600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:27:08.551337    8476 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:27:08.665330    8476 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 10:27:08.669336    8476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:27:08.911222    8476 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:27:08.888781942 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:27:08.914226    8476 out.go:179] * Using the docker driver based on user configuration
	I1213 10:27:08.917218    8476 start.go:309] selected driver: docker
	I1213 10:27:08.917218    8476 start.go:927] validating driver "docker" against <nil>
	I1213 10:27:08.917218    8476 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:27:09.005866    8476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:27:09.274907    8476 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:27:09.25177994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
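
Both info.go:266 dumps come from minikube shelling out to docker system info --format "{{json .}}" while validating the docker driver. A minimal sketch of decoding the capacity fields visible in the dump (NCPU, MemTotal, OSType), assuming nothing beyond the docker CLI being on PATH:

    // Sketch: decode the sizing fields of `docker system info --format "{{json .}}"`,
    // the same command the log shows cli_runner executing.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type dockerInfo struct {
        NCPU     int    `json:"NCPU"`
        MemTotal int64  `json:"MemTotal"`
        OSType   string `json:"OSType"`
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            fmt.Println("docker info failed:", err)
            return
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        fmt.Printf("cpus=%d mem=%d os=%s\n", info.NCPU, info.MemTotal, info.OSType) // e.g. cpus=16 mem=33657536512 os=linux
    }
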
	I1213 10:27:09.275859    8476 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:27:09.275859    8476 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:27:09.278852    8476 out.go:179] * Using Docker Desktop driver with root privileges
	I1213 10:27:09.281854    8476 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1213 10:27:09.281854    8476 start.go:353] cluster config:
	{Name:kubenet-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:27:09.284873    8476 out.go:179] * Starting "kubenet-416400" primary control-plane node in "kubenet-416400" cluster
	I1213 10:27:09.288885    8476 cache.go:134] Beginning downloading kic base image for docker with docker
	I1213 10:27:09.290853    8476 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:27:09.296882    8476 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:27:09.296882    8476 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:27:09.296882    8476 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1213 10:27:09.296882    8476 cache.go:65] Caching tarball of preloaded images
	I1213 10:27:09.297854    8476 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 10:27:09.297854    8476 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1213 10:27:09.297854    8476 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\config.json ...
	I1213 10:27:09.297854    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\config.json: {Name:mk0f8afb036d1878ac71666ce4d58fd434d1389e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:09.364866    8476 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:27:09.364866    8476 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:27:09.364866    8476 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:27:09.364866    8476 start.go:360] acquireMachinesLock for kubenet-416400: {Name:mk28dcadbda914f3b76421bc1eef202d654b5e0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:27:09.365883    8476 start.go:364] duration metric: took 0s to acquireMachinesLock for "kubenet-416400"
	I1213 10:27:09.365883    8476 start.go:93] Provisioning new machine with config: &{Name:kubenet-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 10:27:09.365883    8476 start.go:125] createHost starting for "" (driver="docker")
	I1213 10:27:06.633379    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:06.659612    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:06.687667    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.687737    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:06.691602    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:06.721405    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.721405    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:06.725270    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:06.757478    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.757478    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:06.761297    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:06.801212    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.801212    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:06.805113    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:06.849918    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.849918    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:06.853787    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:06.888435    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.888435    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:06.895174    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:06.930085    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.930085    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:06.933086    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:06.964089    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.964089    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:06.964089    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:06.964089    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:07.052109    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:07.052109    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:07.092822    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:07.092822    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:07.184921    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:07.172596   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.173907   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.175435   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.176746   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.177730   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:07.172596   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.173907   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.175435   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.176746   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.177730   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
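
Every describe-nodes attempt in this stretch fails the same way: the docker ps filters above find zero control-plane containers, so nothing is listening on the kubeconfig's localhost:8443 endpoint and kubectl gets connection refused. A minimal sketch of probing that port directly, with the endpoint taken from the errors above and an arbitrary retry cadence:

    // Sketch: probe the apiserver endpoint from the errors above (localhost:8443)
    // without going through kubectl.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        for i := 0; i < 5; i++ {
            conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("apiserver port is accepting connections")
                return
            }
            fmt.Println("still refused:", err) // matches the kubectl "connection refused" above
            time.Sleep(3 * time.Second)
        }
        fmt.Println("gave up: nothing listening on 8443")
    }
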
	I1213 10:27:07.184921    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:07.184921    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:07.212614    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:07.212614    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:09.772840    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:09.803912    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:09.843377    5404 logs.go:282] 0 containers: []
	W1213 10:27:09.843377    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:09.846881    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:09.876528    5404 logs.go:282] 0 containers: []
	W1213 10:27:09.876528    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:09.879529    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:09.910044    5404 logs.go:282] 0 containers: []
	W1213 10:27:09.910044    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:09.916549    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:09.959417    5404 logs.go:282] 0 containers: []
	W1213 10:27:09.959417    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:09.964602    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:09.999344    5404 logs.go:282] 0 containers: []
	W1213 10:27:09.999344    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:10.002336    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:10.032356    5404 logs.go:282] 0 containers: []
	W1213 10:27:10.032356    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:10.036336    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:10.070437    5404 logs.go:282] 0 containers: []
	W1213 10:27:10.070489    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:10.074554    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:10.112271    5404 logs.go:282] 0 containers: []
	W1213 10:27:10.112330    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:10.112330    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:10.112330    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:10.147886    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:10.147886    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:10.243310    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:10.232461   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.233610   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.235121   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.236121   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.237697   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:10.232461   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.233610   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.235121   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.236121   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.237697   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:10.243405    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:10.243405    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:10.272729    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:10.272729    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:10.326215    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:10.326215    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:09.368853    8476 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 10:27:09.369855    8476 start.go:159] libmachine.API.Create for "kubenet-416400" (driver="docker")
	I1213 10:27:09.369855    8476 client.go:173] LocalClient.Create starting
	I1213 10:27:09.369855    8476 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1213 10:27:09.369855    8476 main.go:143] libmachine: Decoding PEM data...
	I1213 10:27:09.369855    8476 main.go:143] libmachine: Parsing certificate...
	I1213 10:27:09.369855    8476 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1213 10:27:09.369855    8476 main.go:143] libmachine: Decoding PEM data...
	I1213 10:27:09.369855    8476 main.go:143] libmachine: Parsing certificate...
	I1213 10:27:09.375556    8476 cli_runner.go:164] Run: docker network inspect kubenet-416400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 10:27:09.428532    8476 cli_runner.go:211] docker network inspect kubenet-416400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 10:27:09.431540    8476 network_create.go:284] running [docker network inspect kubenet-416400] to gather additional debugging logs...
	I1213 10:27:09.431540    8476 cli_runner.go:164] Run: docker network inspect kubenet-416400
	W1213 10:27:09.477538    8476 cli_runner.go:211] docker network inspect kubenet-416400 returned with exit code 1
	I1213 10:27:09.477538    8476 network_create.go:287] error running [docker network inspect kubenet-416400]: docker network inspect kubenet-416400: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubenet-416400 not found
	I1213 10:27:09.477538    8476 network_create.go:289] output of [docker network inspect kubenet-416400]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubenet-416400 not found
	
	** /stderr **
	I1213 10:27:09.481534    8476 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:27:09.553692    8476 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:27:09.568537    8476 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:27:09.580557    8476 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e4c0f0}
	I1213 10:27:09.581551    8476 network_create.go:124] attempt to create docker network kubenet-416400 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1213 10:27:09.584547    8476 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400
	W1213 10:27:09.637542    8476 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400 returned with exit code 1
	W1213 10:27:09.637542    8476 network_create.go:149] failed to create docker network kubenet-416400 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1213 10:27:09.637542    8476 network_create.go:116] failed to create docker network kubenet-416400 192.168.67.0/24, will retry: subnet is taken
	I1213 10:27:09.664108    8476 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:27:09.678099    8476 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001885710}
	I1213 10:27:09.678099    8476 network_create.go:124] attempt to create docker network kubenet-416400 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 10:27:09.682098    8476 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400
	W1213 10:27:09.738074    8476 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400 returned with exit code 1
	W1213 10:27:09.738074    8476 network_create.go:149] failed to create docker network kubenet-416400 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1213 10:27:09.738074    8476 network_create.go:116] failed to create docker network kubenet-416400 192.168.76.0/24, will retry: subnet is taken
	I1213 10:27:09.757990    8476 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:27:09.771930    8476 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001910480}
	I1213 10:27:09.772001    8476 network_create.go:124] attempt to create docker network kubenet-416400 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1213 10:27:09.775120    8476 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400
	I1213 10:27:09.917706    8476 network_create.go:108] docker network kubenet-416400 192.168.85.0/24 created
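
The sequence above is minikube's subnet-probing retry: it tries 192.168.67.0/24, then 192.168.76.0/24, then 192.168.85.0/24, moving to the next candidate whenever the daemon answers "Pool overlaps with other one on this address space". A compressed sketch of that loop, using the docker network create flags from the log (labels omitted); the loop shape is an assumption, not minikube's own code:

    // Sketch: step through candidate /24 subnets and retry `docker network create`
    // while the daemon reports an overlapping pool, as in the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        for _, subnet := range []string{"192.168.67.0/24", "192.168.76.0/24", "192.168.85.0/24"} {
            gateway := strings.TrimSuffix(subnet, "0/24") + "1" // e.g. 192.168.67.1
            out, err := exec.Command("docker", "network", "create",
                "--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
                "-o", "--ip-masq", "-o", "--icc", "-o", "com.docker.network.driver.mtu=1500",
                "kubenet-416400").CombinedOutput()
            if err == nil {
                fmt.Println("created network on", subnet)
                return
            }
            if strings.Contains(string(out), "Pool overlaps") {
                continue // subnet taken, try the next candidate
            }
            fmt.Println("unrecoverable:", strings.TrimSpace(string(out)))
            return
        }
        fmt.Println("no free subnet found")
    }
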
	I1213 10:27:09.917706    8476 kic.go:121] calculated static IP "192.168.85.2" for the "kubenet-416400" container
	I1213 10:27:09.926674    8476 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 10:27:09.990344    8476 cli_runner.go:164] Run: docker volume create kubenet-416400 --label name.minikube.sigs.k8s.io=kubenet-416400 --label created_by.minikube.sigs.k8s.io=true
	I1213 10:27:10.043336    8476 oci.go:103] Successfully created a docker volume kubenet-416400
	I1213 10:27:10.046336    8476 cli_runner.go:164] Run: docker run --rm --name kubenet-416400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-416400 --entrypoint /usr/bin/test -v kubenet-416400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 10:27:11.508914    8476 cli_runner.go:217] Completed: docker run --rm --name kubenet-416400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-416400 --entrypoint /usr/bin/test -v kubenet-416400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.4625571s)
	I1213 10:27:11.508914    8476 oci.go:107] Successfully prepared a docker volume kubenet-416400
	I1213 10:27:11.508914    8476 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:27:11.508914    8476 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 10:27:11.513316    8476 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-416400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
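
The preload step above mounts the .tar.lz4 read-only into a throwaway kicbase container and untars it into the named volume, so the node container later starts with images already under /var. A sketch of issuing the same command from Go, with paths from the log (the image digest is omitted here for readability):

    // Sketch: extract a preloaded image tarball into a docker volume via a
    // one-shot container, mirroring the `docker run --entrypoint /usr/bin/tar` above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        tarball := `C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4`
        cmd := exec.Command("docker", "run", "--rm", "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", "kubenet-416400:/extractDir",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083",
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Println("extract failed:", err, string(out))
            return
        }
        fmt.Println("preloaded images extracted into volume")
    }
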
	I1213 10:27:12.902491    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:12.927076    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:12.960518    5404 logs.go:282] 0 containers: []
	W1213 10:27:12.960518    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:12.964255    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:12.994335    5404 logs.go:282] 0 containers: []
	W1213 10:27:12.994335    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:12.998437    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:13.029262    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.029262    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:13.032271    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:13.063264    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.063264    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:13.066261    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:13.100216    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.100278    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:13.103950    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:13.137029    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.137029    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:13.140883    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:13.174413    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.174413    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:13.178202    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:13.207016    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.207016    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:13.207016    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:13.207016    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:13.259542    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:13.259542    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:13.332062    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:13.332062    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:13.371879    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:13.371879    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:13.456462    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:13.445517   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.446626   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.447825   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.448792   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.450006   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:13.445517   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.446626   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.447825   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.448792   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.450006   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:13.456462    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:13.456462    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:15.989415    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:16.012448    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:16.052242    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.052312    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:16.055633    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:16.090683    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.090683    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:16.093931    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:16.133949    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.133949    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:16.138532    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:16.171831    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.171831    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:16.175955    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:16.216817    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.216864    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:16.221712    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:16.258393    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.258393    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:16.261397    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:16.294407    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.294407    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:16.297391    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:16.333410    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.333410    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:16.333410    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:16.333410    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:16.410413    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:16.410413    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:16.450393    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:16.450393    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:16.546373    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:16.533035   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.534931   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.537458   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.540395   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.542178   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
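Every describe-nodes attempt fails the same way: nothing is accepting connections on the apiserver port 8443 inside the node. One quick way to confirm that directly, assuming ss is available in the node image (the container name is a placeholder):

    # Check whether anything is listening on the apiserver port inside the node.
    docker exec <node-container> sh -c "ss -ltn | grep :8443 || echo 'nothing listening on 8443'"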
	I1213 10:27:16.546373    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:16.546373    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:16.575806    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:16.575806    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
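The container-status command is deliberately defensive: "which crictl || echo crictl" substitutes the bare name when the binary is missing from PATH, and the trailing "|| sudo docker ps -a" falls back to the Docker CLI if crictl fails for any reason. The same fallback idiom in isolation:

    # Prefer crictl when available, otherwise fall back to the Docker CLI.
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a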
	I1213 10:27:19.148785    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:19.175720    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:19.209231    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.209231    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:19.217486    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:19.260811    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.260866    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:19.267265    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:19.314924    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.314924    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:19.320918    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:19.357550    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.357550    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:19.361556    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:19.392800    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.392800    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:19.397769    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:19.441959    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.441959    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:19.444967    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:19.479965    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.479965    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:19.484482    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:19.525249    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.525314    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:19.525357    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:19.525357    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:19.570778    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:19.570778    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:19.680558    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:19.668248   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.670354   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.672621   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.673972   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.675837   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:19.680656    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:19.680693    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:19.714060    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:19.714103    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:19.764555    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:19.764555    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:22.334977    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:22.359551    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:22.400355    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.400355    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:22.404363    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:22.438349    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.438349    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:22.442349    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:22.473511    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.473511    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:22.478566    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:22.512393    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.512393    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:22.516409    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:22.550405    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.550405    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:22.553404    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:22.584398    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.584398    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:22.588395    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:22.615398    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.615398    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:22.618396    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:22.649404    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.649404    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:22.649404    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:22.649404    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:22.710398    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:22.710398    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:22.751988    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:22.751988    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:22.843768    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:22.835619   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.836770   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.837683   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.838841   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.839832   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:22.843768    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:22.843768    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:22.871626    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:22.871626    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:25.434319    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:25.459020    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:25.500957    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.500957    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:25.505654    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:25.533996    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.534053    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:25.538297    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:25.569653    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.569653    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:25.573591    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:25.606004    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.606004    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:25.612212    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:25.641756    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.641835    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:25.645703    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:25.677304    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.677342    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:25.680988    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:25.712812    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.712812    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:25.716992    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:25.748063    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.748063    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:25.748063    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:25.748063    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:25.800759    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:25.800759    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:25.873214    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:25.873214    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:25.914015    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:25.914015    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:26.003163    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:25.989841   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.991273   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.992553   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.995529   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.997804   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:26.003163    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:26.003163    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:26.833120    8476 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-416400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (15.3195505s)
	I1213 10:27:26.833120    8476 kic.go:203] duration metric: took 15.3239811s to extract preloaded images to volume ...
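The extraction step above seeds the node's /var volume with preloaded images by running a throwaway tar container against the lz4 tarball, so the real node container starts with a warm image cache. The same pattern stripped to its essentials (archive path, volume name, and image are placeholders; any image shipping GNU tar works):

    # Populate a named volume from a host-side .tar.lz4 via a disposable container.
    docker run --rm --entrypoint /usr/bin/tar \
      -v /path/to/preloaded.tar.lz4:/preloaded.tar:ro \
      -v myvolume:/extractDir \
      <image-with-gnu-tar> -I lz4 -xf /preloaded.tar -C /extractDir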
	I1213 10:27:26.839444    8476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:27:27.097722    8476 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:27:27.079878659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
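The struct above is minikube's parse of the JSON emitted by the docker system info command on the preceding Run line. Individual fields can be pulled with the same Go-template mechanism; a sketch using fields visible in the dump:

    # Read just the capacity fields minikube validates from the daemon.
    docker system info --format 'CPUs={{.NCPU}} MemTotal={{.MemTotal}} Driver={{.Driver}}'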
	I1213 10:27:27.101719    8476 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 10:27:27.338932    8476 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-416400 --name kubenet-416400 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-416400 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-416400 --network kubenet-416400 --ip 192.168.85.2 --volume kubenet-416400:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
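That single docker run is the entire KIC node: privileged, with tmpfs /tmp and /run so systemd can boot, the preloaded volume mounted at /var, and each needed port published to an ephemeral 127.0.0.1 port. The SSH mapping it relies on can be read back with the same template the provisioner logs further below:

    # Resolve the ephemeral host port Docker mapped to the node's SSH port 22.
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' kubenet-416400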
	I1213 10:27:28.058796    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Running}}
	I1213 10:27:28.125687    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Status}}
	I1213 10:27:28.182686    8476 cli_runner.go:164] Run: docker exec kubenet-416400 stat /var/lib/dpkg/alternatives/iptables
	I1213 10:27:28.308932    8476 oci.go:144] the created container "kubenet-416400" has a running status.
	I1213 10:27:28.308932    8476 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa...
	I1213 10:27:28.438434    8476 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 10:27:28.537436    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:28.561363    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:28.619392    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.619392    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:28.623396    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:28.669400    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.669400    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:28.676410    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:28.717401    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.717401    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:28.721393    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:28.757400    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.757400    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:28.760393    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:28.800402    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.800402    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:28.803398    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:28.841400    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.841400    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:28.844399    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:28.878399    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.878399    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:28.882403    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:28.916403    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.916403    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:28.916403    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:28.916403    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:28.992400    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:28.992400    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:29.040404    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:29.040404    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:29.149363    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:29.137915   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.139172   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.141264   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.142415   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.144176   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:29.149363    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:29.149363    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:29.183066    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:29.183066    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:28.513430    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Status}}
	I1213 10:27:28.575704    8476 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 10:27:28.575704    8476 kic_runner.go:114] Args: [docker exec --privileged kubenet-416400 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 10:27:28.715410    8476 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa...
	I1213 10:27:31.090843    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Status}}
	I1213 10:27:31.148980    8476 machine.go:94] provisionDockerMachine start ...
	I1213 10:27:31.152618    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:31.213696    8476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:27:31.227691    8476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 55079 <nil> <nil>}
	I1213 10:27:31.227691    8476 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:27:31.426494    8476 main.go:143] libmachine: SSH cmd err, output: <nil>: kubenet-416400
	
	I1213 10:27:31.426494    8476 ubuntu.go:182] provisioning hostname "kubenet-416400"
	I1213 10:27:31.430633    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:31.483323    8476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:27:31.484332    8476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 55079 <nil> <nil>}
	I1213 10:27:31.484332    8476 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubenet-416400 && echo "kubenet-416400" | sudo tee /etc/hostname
	I1213 10:27:31.695552    8476 main.go:143] libmachine: SSH cmd err, output: <nil>: kubenet-416400
	
	I1213 10:27:31.701394    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:31.759724    8476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:27:31.759724    8476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 55079 <nil> <nil>}
	I1213 10:27:31.759724    8476 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubenet-416400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-416400/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubenet-416400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:27:31.957771    8476 main.go:143] libmachine: SSH cmd err, output: <nil>: 
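The script above keeps 127.0.1.1, the conventional Debian/Ubuntu self-hostname address, pointing at the new hostname, rewriting an existing entry in place rather than appending a duplicate. Verifying the result is a one-liner (run inside the node):

    # Confirm the hostname mapping the provisioner just ensured.
    grep '^127.0.1.1' /etc/hosts    # expected: 127.0.1.1 kubenet-416400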
	I1213 10:27:31.957771    8476 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1213 10:27:31.957771    8476 ubuntu.go:190] setting up certificates
	I1213 10:27:31.957771    8476 provision.go:84] configureAuth start
	I1213 10:27:31.961622    8476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-416400
	I1213 10:27:32.029795    8476 provision.go:143] copyHostCerts
	I1213 10:27:32.030302    8476 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1213 10:27:32.030343    8476 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1213 10:27:32.030585    8476 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1213 10:27:32.031834    8476 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1213 10:27:32.031890    8476 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1213 10:27:32.032201    8476 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1213 10:27:32.033307    8476 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1213 10:27:32.033341    8476 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1213 10:27:32.033717    8476 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1213 10:27:32.034519    8476 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubenet-416400 san=[127.0.0.1 192.168.85.2 kubenet-416400 localhost minikube]
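The server certificate is minted from the local minikube CA with SANs covering every name the node's Docker daemon will be reached by: loopback, the container IP 192.168.85.2, the hostname, and the generic minikube aliases. The SANs can be read back from the generated file (path as logged above; openssl assumed available):

    # Inspect the SANs baked into the freshly generated server certificate.
    openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'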
	I1213 10:27:32.150424    8476 provision.go:177] copyRemoteCerts
	I1213 10:27:32.155416    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:27:32.160422    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:32.214413    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	I1213 10:27:32.367375    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:27:32.404881    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I1213 10:27:32.437627    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:27:32.464627    8476 provision.go:87] duration metric: took 506.8482ms to configureAuth
	I1213 10:27:32.464627    8476 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:27:32.465634    8476 config.go:182] Loaded profile config "kubenet-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 10:27:32.469262    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:32.530015    8476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:27:32.530111    8476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 55079 <nil> <nil>}
	I1213 10:27:32.530111    8476 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 10:27:32.727229    8476 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1213 10:27:32.727229    8476 ubuntu.go:71] root file system type: overlay
	I1213 10:27:32.727229    8476 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 10:27:32.730229    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:32.781835    8476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:27:32.782115    8476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 55079 <nil> <nil>}
	I1213 10:27:32.782115    8476 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 10:27:32.980566    8476 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
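The unit's own comments spell out the trick: systemd rejects multiple ExecStart= lines for a Type=notify service, so the override first sets an empty ExecStart= to clear the inherited command and only then supplies its own. Whether the override took effect can be checked from systemd's merged view (sketch, run inside the node):

    # Show the ExecStart lines that actually apply after the override.
    systemctl cat docker | grep -n '^ExecStart'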
	I1213 10:27:32.985113    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:33.047448    8476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:27:33.048094    8476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 55079 <nil> <nil>}
	I1213 10:27:33.048138    8476 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
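The one-liner above makes the unit update idempotent: diff -u exits 0 when the rendered file matches what is already installed (so nothing restarts), and a non-zero exit triggers the move, daemon-reload, enable, and restart as a group. The same write-only-if-changed idiom, generalized (paths are placeholders):

    # Replace a config file and bounce its service only when the content changed.
    sudo diff -u "$DST" "$DST.new" >/dev/null || {
      sudo mv "$DST.new" "$DST"
      sudo systemctl daemon-reload && sudo systemctl restart docker
    }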
	I1213 10:27:31.746729    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:31.766711    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:31.799712    5404 logs.go:282] 0 containers: []
	W1213 10:27:31.799712    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:31.802714    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:31.848351    5404 logs.go:282] 0 containers: []
	W1213 10:27:31.848351    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:31.852710    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:31.893847    5404 logs.go:282] 0 containers: []
	W1213 10:27:31.894377    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:31.897862    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:31.937061    5404 logs.go:282] 0 containers: []
	W1213 10:27:31.937061    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:31.942850    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:31.992025    5404 logs.go:282] 0 containers: []
	W1213 10:27:31.992025    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:31.996453    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:32.043414    5404 logs.go:282] 0 containers: []
	W1213 10:27:32.043414    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:32.047410    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:32.082416    5404 logs.go:282] 0 containers: []
	W1213 10:27:32.082416    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:32.086413    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:32.117413    5404 logs.go:282] 0 containers: []
	W1213 10:27:32.117413    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:32.117413    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:32.117413    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:32.184436    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:32.184436    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:32.248252    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:32.248252    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:32.288323    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:32.288323    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:32.395681    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:32.380582   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.381602   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.383843   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.385774   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.388153   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:32.395681    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:32.395681    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:34.939082    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:34.963857    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:35.002856    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.002856    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:35.005854    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:35.038851    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.038851    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:35.041857    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:35.073853    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.073853    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:35.077869    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:35.110852    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.110852    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:35.113850    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:35.152093    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.152093    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:35.156094    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:35.188087    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.188087    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:35.192090    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:35.222187    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.222187    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:35.226185    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:35.257190    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.257190    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:35.257190    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:35.257190    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:35.374442    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:35.357763   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.358774   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.360108   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.362218   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.363767   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:35.357763   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.358774   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.360108   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.362218   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.363767   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:35.374442    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:35.374442    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:35.414747    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:35.414747    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:35.470732    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:35.470732    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:35.530744    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:35.530744    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:34.752548    8476 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-13 10:27:32.964414860 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
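The diff above is the standard systemd override pattern that its own comments describe: an empty ExecStart= first clears the command inherited from the base unit, then exactly one replacement ExecStart= follows. A minimal sketch of the same pattern as a drop-in (the path and flags here are illustrative; minikube rewrites the base unit itself, as the diff header shows):

    # /etc/systemd/system/docker.service.d/override.conf  (illustrative path)
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

    # apply it, then inspect the merged unit, as this log later does with
    # "sudo systemctl cat docker.service"
    sudo systemctl daemon-reload
    sudo systemctl restart docker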
	
	I1213 10:27:34.752590    8476 machine.go:97] duration metric: took 3.6035571s to provisionDockerMachine
	I1213 10:27:34.752590    8476 client.go:176] duration metric: took 25.382363s to LocalClient.Create
	I1213 10:27:34.752660    8476 start.go:167] duration metric: took 25.3823991s to libmachine.API.Create "kubenet-416400"
	I1213 10:27:34.752660    8476 start.go:293] postStartSetup for "kubenet-416400" (driver="docker")
	I1213 10:27:34.752689    8476 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:27:34.757321    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:27:34.760792    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:34.815346    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	I1213 10:27:34.967363    8476 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:27:34.976448    8476 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:27:34.976489    8476 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:27:34.976523    8476 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1213 10:27:34.976670    8476 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1213 10:27:34.977231    8476 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> 29682.pem in /etc/ssl/certs
	I1213 10:27:34.981302    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 10:27:34.993858    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /etc/ssl/certs/29682.pem (1708 bytes)
	I1213 10:27:35.021854    8476 start.go:296] duration metric: took 269.1608ms for postStartSetup
	I1213 10:27:35.027861    8476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-416400
	I1213 10:27:35.080870    8476 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\config.json ...
	I1213 10:27:35.089862    8476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:27:35.093865    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:35.150107    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	I1213 10:27:35.268185    8476 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:27:35.276190    8476 start.go:128] duration metric: took 25.9099265s to createHost
	I1213 10:27:35.276190    8476 start.go:83] releasing machines lock for "kubenet-416400", held for 25.9099265s
	I1213 10:27:35.279209    8476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-416400
	I1213 10:27:35.343302    8476 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1213 10:27:35.346842    8476 ssh_runner.go:195] Run: cat /version.json
	I1213 10:27:35.350867    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:35.352295    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:35.411739    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	I1213 10:27:35.414747    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	W1213 10:27:35.548301    8476 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
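Note that the probe command is curl.exe, the Windows binary name, executed over SSH inside the Linux node, so bash exits with status 127 (command not found); this appears to be what surfaces a few lines below as the "Failing to connect to https://registry.k8s.io/" warning. One way to re-run the probe by hand from the host (profile name taken from this run) would be:

    minikube -p kubenet-416400 ssh -- curl -sS -m 2 https://registry.k8s.io/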
	I1213 10:27:35.553481    8476 ssh_runner.go:195] Run: systemctl --version
	I1213 10:27:35.573784    8476 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:27:35.585474    8476 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:27:35.589468    8476 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:27:35.633416    8476 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 10:27:35.633416    8476 start.go:496] detecting cgroup driver to use...
	I1213 10:27:35.633416    8476 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:27:35.633416    8476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1213 10:27:35.649009    8476 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1213 10:27:35.649009    8476 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1213 10:27:35.671618    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:27:35.696739    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:27:35.711492    8476 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:27:35.715488    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:27:35.732484    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:27:35.752096    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:27:35.772619    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:27:35.796702    8476 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:27:35.815300    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:27:35.839600    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:27:35.861332    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 10:27:35.884116    8476 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:27:35.903094    8476 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:27:35.919226    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:27:36.090670    8476 ssh_runner.go:195] Run: sudo systemctl restart containerd
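The string of sed substitutions above edits /etc/containerd/config.toml in place rather than templating a fresh file. Reconstructed from those substitutions alone (a sketch, not the full file on the node), the affected stanza ends up roughly like:

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      sandbox_image = "registry.k8s.io/pause:3.10.1"
      restrict_oom_score_adj = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false   # matches the "cgroupfs" driver detected above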
	I1213 10:27:36.249395    8476 start.go:496] detecting cgroup driver to use...
	I1213 10:27:36.249395    8476 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:27:36.253347    8476 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 10:27:36.275349    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:27:36.297606    8476 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:27:36.328195    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:27:36.353573    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:27:36.372805    8476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:27:36.406354    8476 ssh_runner.go:195] Run: which cri-dockerd
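crictl resolves its runtime endpoint from /etc/crictl.yaml; the file was first pointed at containerd during provisioning and is rewritten just above for cri-dockerd, since this profile runs the docker runtime (the `which cri-dockerd` check confirms the shim is installed). The whole file is the single line:

    runtime-endpoint: unix:///var/run/cri-dockerd.sock

and is exercised a few lines below when the test runs "sudo /usr/local/bin/crictl version".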
	I1213 10:27:36.417745    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 10:27:36.432809    8476 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (196 bytes)
	I1213 10:27:36.462872    8476 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 10:27:36.616454    8476 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 10:27:36.759020    8476 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 10:27:36.759020    8476 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
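The 130-byte daemon.json copied here carries the cgroup-driver choice. Its exact template is not reproduced in this log; a representative file that sets the same driver might be written as:

    # assumed contents; minikube's actual daemon.json template may include more keys
    sudo tee /etc/docker/daemon.json <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }
    EOF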
	I1213 10:27:36.784951    8476 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1213 10:27:36.811665    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:27:36.964769    8476 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 10:27:37.921141    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:27:37.944144    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 10:27:37.967237    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 10:27:37.988498    8476 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 10:27:38.188916    8476 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 10:27:38.358397    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:27:38.521403    8476 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 10:27:38.546402    8476 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1213 10:27:38.569221    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:27:38.730646    8476 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 10:27:38.878189    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 10:27:38.898180    8476 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 10:27:38.902189    8476 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 10:27:38.911194    8476 start.go:564] Will wait 60s for crictl version
	I1213 10:27:38.916189    8476 ssh_runner.go:195] Run: which crictl
	I1213 10:27:38.926186    8476 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:27:38.973186    8476 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1213 10:27:38.978795    8476 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 10:27:39.038631    8476 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 10:27:38.092084    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:38.124676    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:38.161924    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.161924    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:38.164928    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:38.198945    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.198945    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:38.201915    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:38.228927    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.228927    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:38.231926    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:38.270851    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.270955    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:38.276558    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:38.313393    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.313393    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:38.316394    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:38.348406    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.348406    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:38.351414    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:38.380397    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.380397    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:38.385402    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:38.417397    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.417397    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:38.417397    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:38.417397    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:38.488395    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:38.488395    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:38.526408    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:38.526408    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:38.618667    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:38.608046   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.608871   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.611071   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.612089   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.612946   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:38.608046   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.608871   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.611071   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.612089   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.612946   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:38.618667    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:38.618667    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:38.648614    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:38.649617    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:39.102779    8476 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.2 ...
	I1213 10:27:39.107988    8476 cli_runner.go:164] Run: docker exec -t kubenet-416400 dig +short host.docker.internal
	I1213 10:27:39.257345    8476 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1213 10:27:39.260347    8476 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1213 10:27:39.268341    8476 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
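The { grep -v ...; echo ...; } > /tmp/h.$$ followed by sudo cp idiom above replaces any stale host.minikube.internal entry and sidesteps the classic pitfall that sudo echo ... >> /etc/hosts fails: the redirection is performed by the unprivileged calling shell, not by sudo. Stripped to its shape (host name and IP illustrative):

    # rebuild /etc/hosts without the old entry, append the new one, install as root
    { grep -v $'\thost.example.internal$' /etc/hosts; echo $'10.0.0.1\thost.example.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts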
	I1213 10:27:39.287341    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:39.347887    8476 kubeadm.go:884] updating cluster {Name:kubenet-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:27:39.347887    8476 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:27:39.352726    8476 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 10:27:39.403212    8476 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 10:27:39.403212    8476 docker.go:621] Images already preloaded, skipping extraction
	I1213 10:27:39.407208    8476 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 10:27:39.440282    8476 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 10:27:39.440822    8476 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:27:39.440822    8476 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 docker true true} ...
	I1213 10:27:39.441138    8476 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubenet-416400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --pod-cidr=10.244.0.0/16
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kubenet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 10:27:39.446529    8476 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1213 10:27:39.559260    8476 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1213 10:27:39.559320    8476 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:27:39.559347    8476 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-416400 NodeName:kubenet-416400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:27:39.559347    8476 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubenet-416400"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
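The generated kubeadm.yaml above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration into one multi-document file, which is what the kubeadm init invocation further down consumes. When checking such a file by hand, newer kubeadm releases ship a lint subcommand (confirm it exists in the kubeadm version on the node):

    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml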
	
	I1213 10:27:39.563035    8476 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 10:27:39.576055    8476 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:27:39.580043    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:27:39.597066    8476 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (338 bytes)
	I1213 10:27:39.616038    8476 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 10:27:39.638041    8476 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1213 10:27:39.672042    8476 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:27:39.680043    8476 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:27:39.700046    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:27:39.887167    8476 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:27:39.917364    8476 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400 for IP: 192.168.85.2
	I1213 10:27:39.917364    8476 certs.go:195] generating shared ca certs ...
	I1213 10:27:39.917364    8476 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:39.918062    8476 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1213 10:27:39.918062    8476 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1213 10:27:39.918062    8476 certs.go:257] generating profile certs ...
	I1213 10:27:39.918912    8476 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\client.key
	I1213 10:27:39.918966    8476 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\client.crt with IP's: []
	I1213 10:27:39.969525    8476 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\client.crt ...
	I1213 10:27:39.969525    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\client.crt: {Name:mkded0c3a33573ddb9efde80db53622d23beebc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:39.970523    8476 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\client.key ...
	I1213 10:27:39.970523    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\client.key: {Name:mkddb0c680c1cfbc7fb76412dc59f990aa3351fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:39.970523    8476 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.key.da8001c6
	I1213 10:27:39.970523    8476 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.crt.da8001c6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1213 10:27:40.148355    8476 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.crt.da8001c6 ...
	I1213 10:27:40.148355    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.crt.da8001c6: {Name:mkb638048bd89c15c2729273b91ace1d4490353e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:40.148703    8476 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.key.da8001c6 ...
	I1213 10:27:40.148703    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.key.da8001c6: {Name:mk4e2e28e87911a65a5741680815685d917d2bc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:40.149871    8476 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.crt.da8001c6 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.crt
	I1213 10:27:40.164141    8476 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.key.da8001c6 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.key
	I1213 10:27:40.165495    8476 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.key
	I1213 10:27:40.165495    8476 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.crt with IP's: []
	I1213 10:27:40.389110    8476 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.crt ...
	I1213 10:27:40.389110    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.crt: {Name:mk9ea56953d9936fd5e08b8dc707cf8c179327b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:40.390173    8476 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.key ...
	I1213 10:27:40.390173    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.key: {Name:mk1d05f99191685ca712d4d7978411bd7096c85b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:40.404560    8476 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem (1338 bytes)
	W1213 10:27:40.404560    8476 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968_empty.pem, impossibly tiny 0 bytes
	I1213 10:27:40.404560    8476 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1213 10:27:40.404560    8476 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1213 10:27:40.404560    8476 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1213 10:27:40.405551    8476 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1213 10:27:40.405551    8476 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem (1708 bytes)
	I1213 10:27:40.406555    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:27:40.441360    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:27:40.476758    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:27:40.508936    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 10:27:40.539795    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 10:27:40.569170    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 10:27:40.700611    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:27:40.735214    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:27:40.767361    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /usr/share/ca-certificates/29682.pem (1708 bytes)
	I1213 10:27:40.807746    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:27:40.841101    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem --> /usr/share/ca-certificates/2968.pem (1338 bytes)
	I1213 10:27:40.876541    8476 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:27:40.905929    8476 ssh_runner.go:195] Run: openssl version
	I1213 10:27:40.919422    8476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29682.pem
	I1213 10:27:40.935412    8476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29682.pem /etc/ssl/certs/29682.pem
	I1213 10:27:40.958800    8476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29682.pem
	I1213 10:27:40.966774    8476 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:48 /usr/share/ca-certificates/29682.pem
	I1213 10:27:40.970772    8476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29682.pem
	I1213 10:27:41.020692    8476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:27:41.042422    8476 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/29682.pem /etc/ssl/certs/3ec20f2e.0
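The openssl x509 -hash -noout / ln -fs ... 3ec20f2e.0 pairs above implement OpenSSL's hashed-directory lookup: a CA certificate is located via a symlink named after its subject-name hash plus a .0 suffix. The generic recipe (file names illustrative):

    # link a CA cert under its subject hash so OpenSSL finds it in /etc/ssl/certs
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
    sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${h}.0"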
	I1213 10:27:41.062440    8476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:27:41.083044    8476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:27:41.101089    8476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:27:41.109913    8476 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:27:41.115807    8476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:27:41.166390    8476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:27:41.184269    8476 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 10:27:41.205563    8476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2968.pem
	I1213 10:27:41.225153    8476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2968.pem /etc/ssl/certs/2968.pem
	I1213 10:27:41.244522    8476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2968.pem
	I1213 10:27:41.255274    8476 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:48 /usr/share/ca-certificates/2968.pem
	I1213 10:27:41.258261    8476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2968.pem
	I1213 10:27:41.337148    8476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:27:41.361850    8476 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2968.pem /etc/ssl/certs/51391683.0
	I1213 10:27:41.386416    8476 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:27:41.397702    8476 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 10:27:41.398038    8476 kubeadm.go:401] StartCluster: {Name:kubenet-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:27:41.402376    8476 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 10:27:41.436826    8476 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:27:41.456770    8476 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:27:41.472386    8476 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:27:41.476747    8476 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:27:41.495422    8476 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:27:41.495422    8476 kubeadm.go:158] found existing configuration files:
	
	I1213 10:27:41.499410    8476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 10:27:41.516241    8476 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:27:41.521896    8476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:27:41.541264    8476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 10:27:41.558570    8476 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:27:41.564101    8476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:27:41.584137    8476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 10:27:41.604304    8476 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:27:41.610955    8476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:27:41.630902    8476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 10:27:41.645473    8476 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:27:41.649275    8476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:27:41.666272    8476 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:27:41.782563    8476 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1213 10:27:41.788925    8476 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1213 10:27:41.907030    8476 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
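These [WARNING ...] lines are non-fatal: Swap and SystemVerification are additionally listed in the --ignore-preflight-errors set above, and minikube starts the kubelet itself earlier in the log ("sudo systemctl start kubelet"). On a long-lived node the Service-Kubelet warning would be silenced exactly as the message suggests:

    sudo systemctl enable kubelet.service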
	I1213 10:27:41.206851    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:41.233354    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:41.265257    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.265257    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:41.269906    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:41.306686    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.306741    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:41.310710    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:41.357371    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.357427    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:41.361994    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:41.408206    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.408206    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:41.412215    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:41.440724    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.440761    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:41.444506    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:41.485572    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.485572    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:41.489246    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:41.524191    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.524191    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:41.528287    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:41.561636    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.561708    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:41.561708    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:41.561743    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:41.640633    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:41.640633    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:41.679302    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:41.680274    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:41.769509    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:41.756355   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.757496   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.758621   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.762100   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.763629   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:41.756355   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.757496   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.758621   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.762100   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.763629   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
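The five near-identical memcache.go errors in each failed block appear to come from kubectl's discovery client retrying the API group list before giving up; the underlying failure is simply that nothing is listening on localhost:8443 inside the node yet. A minimal Go sketch that reproduces the same dial error (an illustrative helper, not minikube code):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// With no kube-apiserver container running, this dial fails with
		// "connect: connection refused", matching the log lines above.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is open")
	}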
	I1213 10:27:41.769509    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:41.769509    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:41.799016    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:41.799067    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
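Each gathering cycle ends with the container-status probe above, which prefers crictl and falls back to docker ps -a when crictl is absent or fails. A sketch of the same fallback invoked from Go (illustrative only; the real call site runs this over SSH via ssh_runner.go):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus mirrors the fallback seen in the log: prefer crictl,
	// fall back to docker if crictl is missing. Hypothetical helper name.
	func containerStatus() (string, error) {
		cmd := exec.Command("/bin/bash", "-c",
			"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
		out, err := cmd.CombinedOutput()
		return string(out), err
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("container status failed:", err)
		}
		fmt.Print(out)
	}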
	I1213 10:27:44.369546    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:44.392404    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:44.422173    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.422173    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:44.426709    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:44.462171    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.462253    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:44.466284    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:44.494675    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.494675    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:44.499090    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:44.525551    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.525576    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:44.529460    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:44.557893    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.557944    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:44.561644    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:44.592507    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.592507    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:44.598127    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:44.628090    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.628112    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:44.632134    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:44.680973    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.681027    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:44.681074    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:44.681074    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:44.750683    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:44.750683    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:44.791179    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:44.791179    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:44.880384    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:44.868761   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.869600   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.870808   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.872391   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.873598   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:44.868761   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.869600   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.870808   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.872391   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.873598   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:44.880415    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:44.880415    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:44.912168    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:44.912168    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:47.473178    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:47.501052    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:47.534467    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.534540    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:47.538128    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:47.568455    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.568455    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:47.575037    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:47.610628    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.610628    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:47.614588    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:47.650306    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.650306    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:47.655401    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:47.688313    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.688313    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:47.691318    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:47.722314    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.722859    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:47.727885    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:47.758032    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.758032    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:47.761680    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:47.793670    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.793670    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:47.793670    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:47.793670    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:47.882682    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:47.871699   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.872599   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.874519   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.875664   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.876452   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:47.871699   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.872599   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.874519   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.875664   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.876452   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:47.882682    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:47.882682    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:47.916355    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:47.916355    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:47.969201    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:47.969201    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:48.035144    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:48.036141    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:50.578488    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:50.600943    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:50.631833    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.631833    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:50.635998    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:50.674649    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.674649    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:50.677731    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:50.712195    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.712322    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:50.716398    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:50.750764    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.750764    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:50.754125    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:50.786595    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.786595    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:50.790175    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:50.818734    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.818734    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:50.821737    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:50.854679    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.854679    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:50.859104    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:50.889584    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.889584    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:50.889584    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:50.889584    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:50.947004    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:50.947004    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:50.984338    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:50.984338    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:51.071556    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:51.060341   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.061513   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.063176   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.064640   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.065750   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:51.060341   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.061513   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.063176   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.064640   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.065750   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:51.071556    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:51.071556    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:51.102630    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:51.102630    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:53.655677    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:53.682918    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:53.715653    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.715653    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:53.718956    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:53.747498    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.747498    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:53.751451    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:53.781030    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.781060    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:53.785519    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:53.815077    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.815077    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:53.818373    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:53.851406    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.851432    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:53.855158    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:53.886371    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.886426    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:53.890230    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:53.921595    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.921595    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:53.925821    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:53.958793    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.958867    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:53.958867    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:53.958867    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:54.023643    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:54.023643    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:54.069221    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:54.069221    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:54.158534    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:54.148053   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:54.149254   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:54.150659   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:54.151827   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:54.152932   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:54.148053   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:54.149254   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:54.150659   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:54.151827   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:54.152932   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:54.158534    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:54.158534    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:54.187711    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:54.187711    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
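Process 5404 repeats the same eight-container scan roughly every three seconds while it waits for the apiserver to appear. A hedged sketch of such a poll loop (the helper name and the five-minute deadline are assumptions, not minikube's actual values):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// apiserverContainerIDs runs the same docker ps filter seen in the log.
	func apiserverContainerIDs() ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter=name=k8s_kube-apiserver", "--format={{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
		defer cancel()
		tick := time.NewTicker(3 * time.Second)
		defer tick.Stop()
		for {
			select {
			case <-ctx.Done():
				fmt.Println("gave up waiting for kube-apiserver")
				return
			case <-tick.C:
				if ids, err := apiserverContainerIDs(); err == nil && len(ids) > 0 {
					fmt.Println("found kube-apiserver container:", ids[0])
					return
				}
			}
		}
	}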
	I1213 10:27:57.321321    8476 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 10:27:57.321858    8476 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:27:57.322090    8476 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:27:57.322290    8476 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:27:57.322547    8476 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:27:57.322713    8476 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:27:57.327382    8476 out.go:252]   - Generating certificates and keys ...
	I1213 10:27:57.327382    8476 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:27:57.327991    8476 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:27:57.328219    8476 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 10:27:57.328219    8476 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 10:27:57.328219    8476 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 10:27:57.328219    8476 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 10:27:57.328219    8476 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 10:27:57.328961    8476 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kubenet-416400 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 10:27:57.328961    8476 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 10:27:57.328961    8476 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kubenet-416400 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 10:27:57.328961    8476 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 10:27:57.328961    8476 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 10:27:57.328961    8476 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 10:27:57.328961    8476 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:27:57.328961    8476 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:27:57.329956    8476 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:27:57.329956    8476 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:27:57.329956    8476 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:27:57.329956    8476 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:27:57.329956    8476 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:27:57.329956    8476 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:27:57.333993    8476 out.go:252]   - Booting up control plane ...
	I1213 10:27:57.333993    8476 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:27:57.333993    8476 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:27:57.333993    8476 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:27:57.333993    8476 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:27:57.333993    8476 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:27:57.334957    8476 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:27:57.334957    8476 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:27:57.334957    8476 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:27:57.334957    8476 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:27:57.334957    8476 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:27:57.334957    8476 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.499474ms
	I1213 10:27:57.334957    8476 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 10:27:57.335962    8476 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1213 10:27:57.335962    8476 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 10:27:57.335962    8476 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 10:27:57.335962    8476 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.506067897s
	I1213 10:27:57.335962    8476 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.281282907s
	I1213 10:27:57.335962    8476 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 9.504426001s
	I1213 10:27:57.335962    8476 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 10:27:57.336957    8476 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 10:27:57.336957    8476 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 10:27:57.336957    8476 kubeadm.go:319] [mark-control-plane] Marking the node kubenet-416400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 10:27:57.336957    8476 kubeadm.go:319] [bootstrap-token] Using token: fr9253.a366cb10hxgbs57g
	I1213 10:27:57.338959    8476 out.go:252]   - Configuring RBAC rules ...
	I1213 10:27:57.338959    8476 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 10:27:57.339952    8476 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 10:27:57.339952    8476 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 10:27:57.339952    8476 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 10:27:57.339952    8476 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 10:27:57.339952    8476 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 10:27:57.340953    8476 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 10:27:57.340953    8476 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 10:27:57.340953    8476 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 10:27:57.340953    8476 kubeadm.go:319] 
	I1213 10:27:57.340953    8476 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 10:27:57.340953    8476 kubeadm.go:319] 
	I1213 10:27:57.340953    8476 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 10:27:57.340953    8476 kubeadm.go:319] 
	I1213 10:27:57.340953    8476 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 10:27:57.340953    8476 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 10:27:57.340953    8476 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 10:27:57.341967    8476 kubeadm.go:319] 
	I1213 10:27:57.341967    8476 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 10:27:57.341967    8476 kubeadm.go:319] 
	I1213 10:27:57.341967    8476 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 10:27:57.341967    8476 kubeadm.go:319] 
	I1213 10:27:57.341967    8476 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 10:27:57.341967    8476 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 10:27:57.341967    8476 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 10:27:57.341967    8476 kubeadm.go:319] 
	I1213 10:27:57.341967    8476 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 10:27:57.341967    8476 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 10:27:57.341967    8476 kubeadm.go:319] 
	I1213 10:27:57.342958    8476 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token fr9253.a366cb10hxgbs57g \
	I1213 10:27:57.342958    8476 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4e186cc62273bb1ac6e3884beccb3b1254d51eaaca530d60f3ff3ceb07e5bb75 \
	I1213 10:27:57.342958    8476 kubeadm.go:319] 	--control-plane 
	I1213 10:27:57.342958    8476 kubeadm.go:319] 
	I1213 10:27:57.342958    8476 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 10:27:57.342958    8476 kubeadm.go:319] 
	I1213 10:27:57.342958    8476 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token fr9253.a366cb10hxgbs57g \
	I1213 10:27:57.342958    8476 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4e186cc62273bb1ac6e3884beccb3b1254d51eaaca530d60f3ff3ceb07e5bb75 
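The join command's --discovery-token-ca-cert-hash is, per the kubeadm documentation, "sha256:" followed by the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA. A small Go sketch that recomputes it from a CA certificate (the file path is an assumption):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Assumed location of the cluster CA inside the node.
		pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Hash the DER-encoded SubjectPublicKeyInfo, not the whole cert.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}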
	I1213 10:27:57.342958    8476 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1213 10:27:57.342958    8476 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 10:27:57.348959    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:27:57.348959    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kubenet-416400 minikube.k8s.io/updated_at=2025_12_13T10_27_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453 minikube.k8s.io/name=kubenet-416400 minikube.k8s.io/primary=true
	I1213 10:27:57.359965    8476 ops.go:34] apiserver oom_adj: -16
	I1213 10:27:57.481312    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:27:57.982343    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:27:58.481678    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:27:58.981222    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:27:59.482569    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:27:59.981670    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:28:00.482737    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:28:00.667261    8476 kubeadm.go:1114] duration metric: took 3.3242542s to wait for elevateKubeSystemPrivileges
	I1213 10:28:00.667261    8476 kubeadm.go:403] duration metric: took 19.2689858s to StartCluster
	I1213 10:28:00.667261    8476 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:28:00.667261    8476 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:28:00.668362    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:28:00.670249    8476 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 10:28:00.670405    8476 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:28:00.670495    8476 addons.go:70] Setting storage-provisioner=true in profile "kubenet-416400"
	I1213 10:28:00.670495    8476 addons.go:239] Setting addon storage-provisioner=true in "kubenet-416400"
	I1213 10:28:00.670495    8476 addons.go:70] Setting default-storageclass=true in profile "kubenet-416400"
	I1213 10:28:00.670495    8476 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubenet-416400"
	I1213 10:28:00.670495    8476 host.go:66] Checking if "kubenet-416400" exists ...
	I1213 10:28:00.670495    8476 config.go:182] Loaded profile config "kubenet-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 10:28:00.670296    8476 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 10:28:00.672621    8476 out.go:179] * Verifying Kubernetes components...
	I1213 10:28:00.680707    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Status}}
	I1213 10:28:00.681870    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:28:00.683512    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Status}}
	I1213 10:28:00.745823    8476 addons.go:239] Setting addon default-storageclass=true in "kubenet-416400"
	I1213 10:28:00.745823    8476 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:27:56.751844    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:56.777473    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:56.819791    5404 logs.go:282] 0 containers: []
	W1213 10:27:56.819791    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:56.823836    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:56.851634    5404 logs.go:282] 0 containers: []
	W1213 10:27:56.851634    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:56.856515    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:56.890733    5404 logs.go:282] 0 containers: []
	W1213 10:27:56.890733    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:56.896015    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:56.929283    5404 logs.go:282] 0 containers: []
	W1213 10:27:56.929283    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:56.933600    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:56.965281    5404 logs.go:282] 0 containers: []
	W1213 10:27:56.965380    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:56.971621    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:57.007594    5404 logs.go:282] 0 containers: []
	W1213 10:27:57.007594    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:57.011652    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:57.041984    5404 logs.go:282] 0 containers: []
	W1213 10:27:57.041984    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:57.047208    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:57.080712    5404 logs.go:282] 0 containers: []
	W1213 10:27:57.080712    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:57.080712    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:57.080712    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:57.149704    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:57.149704    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:57.193071    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:57.193071    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:57.285994    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:57.274215   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:57.274873   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:57.277962   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:57.279748   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:57.281147   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:57.274215   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:57.274873   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:57.277962   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:57.279748   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:57.281147   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:57.285994    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:57.285994    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:57.321321    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:57.321321    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:59.885480    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:59.908525    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:59.938475    5404 logs.go:282] 0 containers: []
	W1213 10:27:59.938475    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:59.942628    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:59.971795    5404 logs.go:282] 0 containers: []
	W1213 10:27:59.971795    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:59.980520    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:00.013354    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.013413    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:00.017504    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:00.052020    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.052020    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:00.055918    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:00.092456    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.092456    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:00.099457    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:00.132599    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.132599    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:00.136451    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:00.166632    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.166765    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:00.170268    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:00.200588    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.200588    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:00.200588    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:00.200588    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:00.270835    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:00.270835    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:00.309448    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:00.310446    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:00.403831    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:00.393165   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:00.394233   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:00.395506   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:00.396522   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:00.397851   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:28:00.393165   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:00.394233   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:00.395506   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:00.396522   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:00.397851   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:28:00.403831    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:00.403831    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:00.431826    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:00.431826    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:28:00.745823    8476 host.go:66] Checking if "kubenet-416400" exists ...
	I1213 10:28:00.747823    8476 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:28:00.747823    8476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:28:00.751823    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:28:00.752838    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Status}}
	I1213 10:28:00.805827    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	I1213 10:28:00.806835    8476 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:28:00.806835    8476 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:28:00.809826    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:28:00.859695    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	I1213 10:28:00.877310    8476 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 10:28:01.093206    8476 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:28:01.096660    8476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:28:01.289059    8476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:28:01.688169    8476 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
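The replace command at 10:28:00.877310 splices two fragments into the stock CoreDNS Corefile: a log directive ahead of errors, and the following hosts block ahead of the forward plugin, which is what makes host.minikube.internal resolve to the Windows host (fragment reconstructed directly from the sed expression above):

	hosts {
	   192.168.65.254 host.minikube.internal
	   fallthrough
	}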
	I1213 10:28:01.693138    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:28:01.748392    8476 node_ready.go:35] waiting up to 15m0s for node "kubenet-416400" to be "Ready" ...
	I1213 10:28:01.777235    8476 node_ready.go:49] node "kubenet-416400" is "Ready"
	I1213 10:28:01.777235    8476 node_ready.go:38] duration metric: took 28.7755ms for node "kubenet-416400" to be "Ready" ...
	I1213 10:28:01.778242    8476 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:28:01.782492    8476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:02.197568    8476 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kubenet-416400" context rescaled to 1 replicas
	I1213 10:28:02.343589    8476 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.053978s)
	I1213 10:28:02.343589    8476 api_server.go:72] duration metric: took 1.673269s to wait for apiserver process to appear ...
	I1213 10:28:02.343589    8476 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.246374s)
	I1213 10:28:02.343677    8476 api_server.go:88] waiting for apiserver healthz status ...
	I1213 10:28:02.343720    8476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55078/healthz ...
	I1213 10:28:02.352594    8476 api_server.go:279] https://127.0.0.1:55078/healthz returned 200:
	ok
	I1213 10:28:02.355060    8476 api_server.go:141] control plane version: v1.34.2
	I1213 10:28:02.355060    8476 api_server.go:131] duration metric: took 11.3397ms to wait for apiserver health ...
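
	The healthz wait above is a plain HTTPS GET against the Docker-forwarded apiserver port, tolerating the apiserver's self-signed certificate. A minimal Go sketch of the same probe, assuming the forwarded port 55078 seen in this run (the port varies per cluster):

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    func main() {
	        // Skip certificate verification for this bootstrap probe: the
	        // apiserver cert is only valid for the cluster-internal names.
	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            Transport: &http.Transport{
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        deadline := time.Now().Add(2 * time.Minute)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get("https://127.0.0.1:55078/healthz")
	            if err == nil {
	                body, _ := io.ReadAll(resp.Body)
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    // A healthy apiserver returns 200 with body "ok",
	                    // matching the log lines above.
	                    fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
	                    return
	                }
	            }
	            time.Sleep(time.Second)
	        }
	        fmt.Println("apiserver never became healthy")
	    }
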
	I1213 10:28:02.355060    8476 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 10:28:02.363052    8476 system_pods.go:59] 8 kube-system pods found
	I1213 10:28:02.363052    8476 system_pods.go:61] "coredns-66bc5c9577-pzlst" [c0710760-8702-46dd-82cf-57f9c82bfa9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.363052    8476 system_pods.go:61] "coredns-66bc5c9577-qsf76" [941a59a1-7977-4e35-90e1-5e787611afef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.363052    8476 system_pods.go:61] "etcd-kubenet-416400" [a3364f15-398a-4392-8334-ba7bb2989af2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 10:28:02.363052    8476 system_pods.go:61] "kube-apiserver-kubenet-416400" [64e3f0cf-d02f-4c81-854c-634c396b4c0a] Running
	I1213 10:28:02.363052    8476 system_pods.go:61] "kube-controller-manager-kubenet-416400" [7af07742-4273-4dcd-8776-4db035d3905b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:28:02.363052    8476 system_pods.go:61] "kube-proxy-7bdqb" [a4863c98-3861-4d39-b4e6-81ebe763237d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 10:28:02.363052    8476 system_pods.go:61] "kube-scheduler-kubenet-416400" [df666217-8c7b-4d6d-b9ce-9740af155c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:28:02.363052    8476 system_pods.go:61] "storage-provisioner" [1cae84ec-bfa6-4586-83a9-dfdac48e2707] Pending
	I1213 10:28:02.363052    8476 system_pods.go:74] duration metric: took 7.9926ms to wait for pod list to return data ...
	I1213 10:28:02.363052    8476 default_sa.go:34] waiting for default service account to be created ...
	I1213 10:28:02.363944    8476 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1213 10:28:02.368689    8476 default_sa.go:45] found service account: "default"
	I1213 10:28:02.368689    8476 default_sa.go:55] duration metric: took 5.6365ms for default service account to be created ...
	I1213 10:28:02.368689    8476 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 10:28:02.368892    8476 addons.go:530] duration metric: took 1.6984619s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1213 10:28:02.374322    8476 system_pods.go:86] 8 kube-system pods found
	I1213 10:28:02.374322    8476 system_pods.go:89] "coredns-66bc5c9577-pzlst" [c0710760-8702-46dd-82cf-57f9c82bfa9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.374322    8476 system_pods.go:89] "coredns-66bc5c9577-qsf76" [941a59a1-7977-4e35-90e1-5e787611afef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.374322    8476 system_pods.go:89] "etcd-kubenet-416400" [a3364f15-398a-4392-8334-ba7bb2989af2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 10:28:02.374322    8476 system_pods.go:89] "kube-apiserver-kubenet-416400" [64e3f0cf-d02f-4c81-854c-634c396b4c0a] Running
	I1213 10:28:02.374322    8476 system_pods.go:89] "kube-controller-manager-kubenet-416400" [7af07742-4273-4dcd-8776-4db035d3905b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:28:02.374322    8476 system_pods.go:89] "kube-proxy-7bdqb" [a4863c98-3861-4d39-b4e6-81ebe763237d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 10:28:02.374322    8476 system_pods.go:89] "kube-scheduler-kubenet-416400" [df666217-8c7b-4d6d-b9ce-9740af155c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:28:02.374322    8476 system_pods.go:89] "storage-provisioner" [1cae84ec-bfa6-4586-83a9-dfdac48e2707] Pending
	I1213 10:28:02.374322    8476 retry.go:31] will retry after 257.90094ms: missing components: kube-dns, kube-proxy
	I1213 10:28:02.647317    8476 system_pods.go:86] 8 kube-system pods found
	I1213 10:28:02.647382    8476 system_pods.go:89] "coredns-66bc5c9577-pzlst" [c0710760-8702-46dd-82cf-57f9c82bfa9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.647382    8476 system_pods.go:89] "coredns-66bc5c9577-qsf76" [941a59a1-7977-4e35-90e1-5e787611afef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.647382    8476 system_pods.go:89] "etcd-kubenet-416400" [a3364f15-398a-4392-8334-ba7bb2989af2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 10:28:02.647382    8476 system_pods.go:89] "kube-apiserver-kubenet-416400" [64e3f0cf-d02f-4c81-854c-634c396b4c0a] Running
	I1213 10:28:02.647448    8476 system_pods.go:89] "kube-controller-manager-kubenet-416400" [7af07742-4273-4dcd-8776-4db035d3905b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:28:02.647448    8476 system_pods.go:89] "kube-proxy-7bdqb" [a4863c98-3861-4d39-b4e6-81ebe763237d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 10:28:02.647448    8476 system_pods.go:89] "kube-scheduler-kubenet-416400" [df666217-8c7b-4d6d-b9ce-9740af155c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:28:02.647496    8476 system_pods.go:89] "storage-provisioner" [1cae84ec-bfa6-4586-83a9-dfdac48e2707] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:28:02.647496    8476 retry.go:31] will retry after 305.033982ms: missing components: kube-dns, kube-proxy
	I1213 10:28:02.960601    8476 system_pods.go:86] 8 kube-system pods found
	I1213 10:28:02.960642    8476 system_pods.go:89] "coredns-66bc5c9577-pzlst" [c0710760-8702-46dd-82cf-57f9c82bfa9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.960678    8476 system_pods.go:89] "coredns-66bc5c9577-qsf76" [941a59a1-7977-4e35-90e1-5e787611afef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.960678    8476 system_pods.go:89] "etcd-kubenet-416400" [a3364f15-398a-4392-8334-ba7bb2989af2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 10:28:02.960728    8476 system_pods.go:89] "kube-apiserver-kubenet-416400" [64e3f0cf-d02f-4c81-854c-634c396b4c0a] Running
	I1213 10:28:02.960728    8476 system_pods.go:89] "kube-controller-manager-kubenet-416400" [7af07742-4273-4dcd-8776-4db035d3905b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:28:02.960728    8476 system_pods.go:89] "kube-proxy-7bdqb" [a4863c98-3861-4d39-b4e6-81ebe763237d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 10:28:02.960728    8476 system_pods.go:89] "kube-scheduler-kubenet-416400" [df666217-8c7b-4d6d-b9ce-9740af155c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:28:02.960780    8476 system_pods.go:89] "storage-provisioner" [1cae84ec-bfa6-4586-83a9-dfdac48e2707] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:28:02.960803    8476 retry.go:31] will retry after 352.340429ms: missing components: kube-dns, kube-proxy
	I1213 10:28:03.376766    8476 system_pods.go:86] 8 kube-system pods found
	I1213 10:28:03.376766    8476 system_pods.go:89] "coredns-66bc5c9577-pzlst" [c0710760-8702-46dd-82cf-57f9c82bfa9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:03.376766    8476 system_pods.go:89] "coredns-66bc5c9577-qsf76" [941a59a1-7977-4e35-90e1-5e787611afef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:03.376766    8476 system_pods.go:89] "etcd-kubenet-416400" [a3364f15-398a-4392-8334-ba7bb2989af2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 10:28:03.376766    8476 system_pods.go:89] "kube-apiserver-kubenet-416400" [64e3f0cf-d02f-4c81-854c-634c396b4c0a] Running
	I1213 10:28:03.376766    8476 system_pods.go:89] "kube-controller-manager-kubenet-416400" [7af07742-4273-4dcd-8776-4db035d3905b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:28:03.376766    8476 system_pods.go:89] "kube-proxy-7bdqb" [a4863c98-3861-4d39-b4e6-81ebe763237d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 10:28:03.376766    8476 system_pods.go:89] "kube-scheduler-kubenet-416400" [df666217-8c7b-4d6d-b9ce-9740af155c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:28:03.376766    8476 system_pods.go:89] "storage-provisioner" [1cae84ec-bfa6-4586-83a9-dfdac48e2707] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:28:03.377765    8476 retry.go:31] will retry after 379.080105ms: missing components: kube-dns, kube-proxy
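
	The retry delays above (roughly 258ms, 305ms, 352ms, 379ms) grow with a little jitter between polls. A minimal, self-contained Go sketch of that poll-with-backoff pattern; checkPods is a hypothetical stand-in for the real kube-dns/kube-proxy readiness check, and the growth factor is illustrative, not minikube's actual tuning:

	    package main

	    import (
	        "errors"
	        "fmt"
	        "math/rand"
	        "time"
	    )

	    // checkPods stands in for "are kube-dns and kube-proxy running yet".
	    func checkPods() error {
	        return errors.New("missing components: kube-dns, kube-proxy")
	    }

	    func main() {
	        delay := 250 * time.Millisecond
	        deadline := time.Now().Add(15 * time.Second)
	        for time.Now().Before(deadline) {
	            err := checkPods()
	            if err == nil {
	                fmt.Println("all components running")
	                return
	            }
	            // Add up to 25% jitter, then grow the base interval.
	            jitter := time.Duration(rand.Int63n(int64(delay) / 4))
	            fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
	            time.Sleep(delay + jitter)
	            delay = delay * 6 / 5 // ~20% larger per attempt
	        }
	        fmt.Println("timed out waiting for components")
	    }
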
	I1213 10:28:02.990203    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:03.012584    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:03.048099    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.049085    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:03.054131    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:03.090044    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.090114    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:03.094206    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:03.124610    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.124610    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:03.128713    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:03.158624    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.158624    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:03.162039    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:03.197023    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.197023    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:03.201011    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:03.231523    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.231523    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:03.238992    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:03.270780    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.270780    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:03.273777    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:03.307802    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.307802    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:03.307802    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:03.307802    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:28:03.365023    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:03.365023    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:03.434753    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:03.434753    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:03.474998    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:03.474998    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:03.558479    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:03.548624   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.550169   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.550790   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.552338   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.553567   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:28:03.548624   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.550169   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.550790   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.552338   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.553567   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:28:03.558479    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:03.558479    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:06.093878    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:06.119160    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:06.151920    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.151956    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:06.155686    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:03.767616    8476 system_pods.go:86] 7 kube-system pods found
	I1213 10:28:03.767736    8476 system_pods.go:89] "coredns-66bc5c9577-pzlst" [c0710760-8702-46dd-82cf-57f9c82bfa9a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:03.767736    8476 system_pods.go:89] "etcd-kubenet-416400" [a3364f15-398a-4392-8334-ba7bb2989af2] Running
	I1213 10:28:03.767836    8476 system_pods.go:89] "kube-apiserver-kubenet-416400" [64e3f0cf-d02f-4c81-854c-634c396b4c0a] Running
	I1213 10:28:03.767860    8476 system_pods.go:89] "kube-controller-manager-kubenet-416400" [7af07742-4273-4dcd-8776-4db035d3905b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:28:03.767860    8476 system_pods.go:89] "kube-proxy-7bdqb" [a4863c98-3861-4d39-b4e6-81ebe763237d] Running
	I1213 10:28:03.767860    8476 system_pods.go:89] "kube-scheduler-kubenet-416400" [df666217-8c7b-4d6d-b9ce-9740af155c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:28:03.767860    8476 system_pods.go:89] "storage-provisioner" [1cae84ec-bfa6-4586-83a9-dfdac48e2707] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:28:03.767920    8476 system_pods.go:126] duration metric: took 1.399211s to wait for k8s-apps to be running ...
	I1213 10:28:03.767952    8476 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 10:28:03.772800    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:28:03.793452    8476 system_svc.go:56] duration metric: took 25.5002ms WaitForService to wait for kubelet
	I1213 10:28:03.793452    8476 kubeadm.go:587] duration metric: took 3.1231108s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:28:03.793452    8476 node_conditions.go:102] verifying NodePressure condition ...
	I1213 10:28:03.799850    8476 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1213 10:28:03.799942    8476 node_conditions.go:123] node cpu capacity is 16
	I1213 10:28:03.799942    8476 node_conditions.go:105] duration metric: took 6.4898ms to run NodePressure ...
	I1213 10:28:03.800002    8476 start.go:242] waiting for startup goroutines ...
	I1213 10:28:03.800002    8476 start.go:247] waiting for cluster config update ...
	I1213 10:28:03.800034    8476 start.go:256] writing updated cluster config ...
	I1213 10:28:03.805062    8476 ssh_runner.go:195] Run: rm -f paused
	I1213 10:28:03.812457    8476 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 10:28:03.818438    8476 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pzlst" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 10:28:05.831273    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:08.330368    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	I1213 10:28:06.185340    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.185340    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:06.189047    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:06.218663    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.218713    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:06.223022    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:06.251817    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.251817    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:06.256048    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:06.288967    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.289042    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:06.293045    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:06.324404    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.324404    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:06.328470    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:06.359488    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.359488    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:06.363305    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:06.395085    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.395085    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:06.395085    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:06.395085    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:06.460705    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:06.460705    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:06.500531    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:06.500531    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:06.584202    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:06.573119   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.576304   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.577709   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.579122   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.580090   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:28:06.573119   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.576304   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.577709   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.579122   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.580090   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:28:06.584202    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:06.584202    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:06.612936    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:06.612936    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:28:09.171143    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:09.196436    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:09.230003    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.230072    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:09.234113    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:09.263594    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.263629    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:09.267574    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:09.295583    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.295671    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:09.300744    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:09.330627    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.330627    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:09.334426    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:09.370279    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.370279    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:09.374820    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:09.404955    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.405033    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:09.410253    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:09.441568    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.441568    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:09.445297    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:09.485821    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.485874    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:09.485874    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:09.485936    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:09.548603    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:09.548603    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:09.588521    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:09.588521    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:09.678327    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:09.666892   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.667836   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.670310   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.671394   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.672438   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:28:09.666892   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.667836   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.670310   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.671394   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.672438   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:28:09.678369    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:09.678369    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:09.705500    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:09.705500    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:28:10.333290    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:12.830400    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	I1213 10:28:12.262086    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:12.290635    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:12.327110    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.327110    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:12.331105    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:12.360305    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.360305    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:12.367813    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:12.398968    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.399045    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:12.403042    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:12.436089    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.436089    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:12.439942    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:12.471734    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.471734    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:12.475722    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:12.505991    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.506024    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:12.509742    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:12.539425    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.539425    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:12.543823    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:12.573279    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.573344    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:12.573344    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:12.573344    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:12.636807    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:12.636807    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:12.677094    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:12.677094    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:12.762424    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:12.751891   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.752690   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.755186   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.756173   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.756852   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:28:12.751891   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.752690   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.755186   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.756173   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.756852   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:28:12.762424    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:12.762424    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:12.790164    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:12.790164    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:28:15.344891    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:15.368646    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:15.404255    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.404255    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:15.409408    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:15.441938    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.441938    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:15.445068    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:15.475697    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.475697    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:15.479253    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:15.511327    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.511327    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:15.515265    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:15.545395    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.545395    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:15.548941    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:15.579842    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.579918    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:15.584969    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:15.614571    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.614571    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:15.618436    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:15.650365    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.650427    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:15.650427    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:15.650427    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:15.714351    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:15.714351    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:15.752018    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:15.752018    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:15.834772    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:15.824883   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.826055   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.826571   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.829124   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.829823   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:28:15.824883   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.826055   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.826571   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.829124   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.829823   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:28:15.834772    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:15.834772    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:15.866850    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:15.866850    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:28:14.830848    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:17.329771    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	I1213 10:28:18.423576    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:18.449885    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:18.482529    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.482601    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:18.485766    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:18.514138    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.514797    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:18.518214    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:18.550542    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.550542    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:18.553540    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:18.584106    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.584106    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:18.588197    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:18.619945    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.619977    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:18.623644    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:18.654453    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.654453    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:18.657446    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:18.687250    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.687250    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:18.690703    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:18.717150    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.717150    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:18.717150    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:18.717150    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:28:18.770937    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:18.770937    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:18.835919    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:18.835919    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:18.872319    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:18.873326    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:18.962288    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:18.952563   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.953751   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.955148   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.956811   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.959348   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:28:18.952563   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.953751   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.955148   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.956811   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.959348   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:28:18.962288    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:18.963246    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:21.496578    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:21.522995    5404 out.go:203] 
	W1213 10:28:21.525440    5404 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1213 10:28:21.525581    5404 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1213 10:28:21.525667    5404 out.go:285] * Related issues:
	W1213 10:28:21.525667    5404 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1213 10:28:21.525824    5404 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1213 10:28:21.528379    5404 out.go:203] 
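
	K8S_APISERVER_MISSING means the pgrep probe above never matched a kube-apiserver process inside the node, which is also why every kubectl call was refused on localhost:8443. Assuming the node container is newest-cni-307000 (the name in the Docker journal below), the same probe can be reproduced from the host with docker exec; this Go sketch mirrors it and is illustrative, not minikube code:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        // Same pgrep pattern as the ssh_runner lines above, run via
	        // docker exec instead of SSH. Exit status 1 means no match.
	        out, err := exec.Command("docker", "exec", "newest-cni-307000",
	            "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").CombinedOutput()
	        if err != nil {
	            fmt.Printf("no kube-apiserver process found: %v\n", err)
	            return
	        }
	        fmt.Printf("kube-apiserver pid: %s", out)
	    }
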
	W1213 10:28:19.831718    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:21.833516    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:24.330384    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:26.331207    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:28.332900    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:30.334351    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:32.835020    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
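
	The pod_ready warnings above come from repeatedly checking the PodReady condition of coredns-66bc5c9577-pzlst in kube-system. A minimal client-go sketch of that condition check, assuming the default kubeconfig; isPodReady is an illustrative helper, not minikube's own:

	    package main

	    import (
	        "context"
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // isPodReady reports whether the pod's PodReady condition is True.
	    func isPodReady(pod *corev1.Pod) bool {
	        for _, c := range pod.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
	            "coredns-66bc5c9577-pzlst", metav1.GetOptions{})
	        if err != nil {
	            panic(err)
	        }
	        fmt.Printf("pod %q ready: %v\n", pod.Name, isPodReady(pod))
	    }
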
	
	
	==> Docker <==
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.725825301Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.725986416Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.725998417Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.726003718Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.726009218Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.726219138Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.726398555Z" level=info msg="Initializing buildkit"
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.844000659Z" level=info msg="Completed buildkit initialization"
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.850793321Z" level=info msg="Daemon has completed initialization"
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.851043146Z" level=info msg="API listen on /run/docker.sock"
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.851051346Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 10:22:16 newest-cni-307000 dockerd[924]: time="2025-12-13T10:22:16.851065248Z" level=info msg="API listen on [::]:2376"
	Dec 13 10:22:16 newest-cni-307000 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 13 10:22:17 newest-cni-307000 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 10:22:17 newest-cni-307000 cri-dockerd[1219]: time="2025-12-13T10:22:17Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 13 10:22:17 newest-cni-307000 cri-dockerd[1219]: time="2025-12-13T10:22:17Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 13 10:22:17 newest-cni-307000 cri-dockerd[1219]: time="2025-12-13T10:22:17Z" level=info msg="Start docker client with request timeout 0s"
	Dec 13 10:22:17 newest-cni-307000 cri-dockerd[1219]: time="2025-12-13T10:22:17Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 13 10:22:17 newest-cni-307000 cri-dockerd[1219]: time="2025-12-13T10:22:17Z" level=info msg="Loaded network plugin cni"
	Dec 13 10:22:17 newest-cni-307000 cri-dockerd[1219]: time="2025-12-13T10:22:17Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 13 10:22:17 newest-cni-307000 cri-dockerd[1219]: time="2025-12-13T10:22:17Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 13 10:22:17 newest-cni-307000 cri-dockerd[1219]: time="2025-12-13T10:22:17Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 13 10:22:17 newest-cni-307000 cri-dockerd[1219]: time="2025-12-13T10:22:17Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 13 10:22:17 newest-cni-307000 cri-dockerd[1219]: time="2025-12-13T10:22:17Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 13 10:22:17 newest-cni-307000 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:35.526591   20237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:35.527870   20237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:35.529024   20237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:35.532370   20237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:35.534268   20237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000002] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +7.347224] CPU: 1 PID: 487650 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f03540a7b20
	[  +0.000039] Code: Unable to access opcode bytes at RIP 0x7f03540a7af6.
	[  +0.000001] RSP: 002b:00007fff4615c900 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.848535] CPU: 14 PID: 487834 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f24bdd40b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f24bdd40af6.
	[  +0.000001] RSP: 002b:00007ffcef45f750 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +9.262444] tmpfs: Unknown parameter 'noswap'
	[ +10.454536] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 10:28:35 up  2:04,  0 user,  load average: 2.59, 3.67, 3.60
	Linux newest-cni-307000 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:28:32 newest-cni-307000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:28:32 newest-cni-307000 kubelet[20049]: E1213 10:28:32.667743   20049 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:28:32 newest-cni-307000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:28:32 newest-cni-307000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:28:33 newest-cni-307000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
	Dec 13 10:28:33 newest-cni-307000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:28:33 newest-cni-307000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:28:33 newest-cni-307000 kubelet[20066]: E1213 10:28:33.421374   20066 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:28:33 newest-cni-307000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:28:33 newest-cni-307000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:28:34 newest-cni-307000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
	Dec 13 10:28:34 newest-cni-307000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:28:34 newest-cni-307000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:28:34 newest-cni-307000 kubelet[20093]: E1213 10:28:34.178466   20093 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:28:34 newest-cni-307000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:28:34 newest-cni-307000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:28:34 newest-cni-307000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
	Dec 13 10:28:34 newest-cni-307000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:28:34 newest-cni-307000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:28:34 newest-cni-307000 kubelet[20122]: E1213 10:28:34.915201   20122 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:28:34 newest-cni-307000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:28:34 newest-cni-307000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:28:35 newest-cni-307000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
	Dec 13 10:28:35 newest-cni-307000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:28:35 newest-cni-307000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
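The kubelet section of the dump above is the root cause of the missing apiserver: kubelet v1.35.0-beta.0 refuses to validate its configuration on a cgroup v1 host, so systemd restarts it in a loop and no static pods (including kube-apiserver) ever come up. A minimal way to confirm the node's cgroup version by hand, assuming the newest-cni-307000 profile from this run is still running (this command is illustrative and not part of the test):

	$ out/minikube-windows-amd64.exe ssh -p newest-cni-307000 -- stat -fc %T /sys/fs/cgroup/
	# "tmpfs" indicates cgroup v1, the case the kubelet error describes; "cgroup2fs" would indicate cgroup v2

On a WSL2-backed Docker Desktop host like this one (kernel 5.15.153.1-microsoft-standard-WSL2 per the dmesg section), cgroup v2 can usually be enabled by adding kernelCommandLine = cgroup_no_v1=all under [wsl2] in .wslconfig, though that is host configuration rather than anything the test controls.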
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-307000 -n newest-cni-307000
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-307000 -n newest-cni-307000: exit status 2 (595.0773ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-307000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (9.72s)
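The Pause failure reduces to the same condition minikube itself probed above with sudo pgrep -xnf kube-apiserver.*minikube.*: no kube-apiserver process on the node. Both sides of that check can be reproduced by hand; the commands below are illustrative sketches assuming the profile is still up (and that curl is present in the kicbase image), not part of the test suite:

	$ out/minikube-windows-amd64.exe ssh -p newest-cni-307000 -- sudo pgrep -af kube-apiserver
	# no output and a non-zero exit means no apiserver process exists, matching K8S_APISERVER_MISSING
	$ out/minikube-windows-amd64.exe ssh -p newest-cni-307000 -- curl -ks https://localhost:8443/healthz
	# with nothing listening this fails with "connection refused", exactly as the describe-nodes
	# kubectl calls above did; a healthy apiserver would answer "ok"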

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (256.43s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:35:22.037121    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:36:09.158080    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:36:11.493631    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:36:17.855512    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-818600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:36:26.278871    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:36:39.201108    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:36:43.307444    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:36:53.937598    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:37:06.907525    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:37:12.561452    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:37:21.644633    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:37:36.733578    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:38:35.639497    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:38:42.413688    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:39:03.007186    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1213 10:39:10.123261    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:39:18.152583    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53494/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-803600 -n no-preload-803600
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-803600 -n no-preload-803600: exit status 2 (703.552ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "no-preload-803600" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-803600 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-803600 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (0s)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-803600 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
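The assertion at start_stop_delete_test.go:295 expects the dashboard-metrics-scraper deployment's info to contain registry.k8s.io/echoserver:1.4, but the describe call above returned nothing because the apiserver was unreachable. Had the cluster been healthy, the image list could have been read directly with a jsonpath query; this is an illustrative command, not one the test runs:

	$ kubectl --context no-preload-803600 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath="{.spec.template.spec.containers[*].image}"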
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-803600
helpers_test.go:244: (dbg) docker inspect no-preload-803600:

-- stdout --
	[
	    {
	        "Id": "3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd",
	        "Created": "2025-12-13T10:09:24.921242732Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 410406,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:19:47.312495248Z",
	            "FinishedAt": "2025-12-13T10:19:43.959791267Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd/hostname",
	        "HostsPath": "/var/lib/docker/containers/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd/hosts",
	        "LogPath": "/var/lib/docker/containers/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd/3960d9897f634815d03dd1656bf1eefce192285c8758337a5c3e0f8bcfea77fd-json.log",
	        "Name": "/no-preload-803600",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-803600:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-803600",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/571041f9092b0534048a0b1dac35e9d4a08a2ff2442796fa15a0636437fe7f5e-init/diff:/var/lib/docker/overlay2/429aa299c6fcdb1695d08ec7c893c57c033afffcd3ec41fc904bf3236db5abde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/571041f9092b0534048a0b1dac35e9d4a08a2ff2442796fa15a0636437fe7f5e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/571041f9092b0534048a0b1dac35e9d4a08a2ff2442796fa15a0636437fe7f5e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/571041f9092b0534048a0b1dac35e9d4a08a2ff2442796fa15a0636437fe7f5e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-803600",
	                "Source": "/var/lib/docker/volumes/no-preload-803600/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-803600",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-803600",
	                "name.minikube.sigs.k8s.io": "no-preload-803600",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "202edcc07e78147ef811fd01911ae5ff35d0d9d006f45e69c81f5303ddbf73f3",
	            "SandboxKey": "/var/run/docker/netns/202edcc07e78",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53489"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53490"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53491"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53493"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53494"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-803600": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ad4e73e428abf58593ff96b4628f21032a7a4afd7c1c0bb8be8d55b4e2d320fc",
	                    "EndpointID": "5315c65ac1c1a0593e57f42a5908d620f4852bb681cd15a9c6018ed864a9d80f",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-803600",
	                        "3960d9897f63"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
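The inspect output also explains the endpoint the earlier pod-list warnings were hitting: the container's 8443/tcp (the apiserver port) is published on 127.0.0.1:53494, the same https://127.0.0.1:53494 address that kept returning EOF. The mapping can be read back with docker directly (illustrative):

	$ docker port no-preload-803600 8443/tcp
	127.0.0.1:53494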
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-803600 -n no-preload-803600
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-803600 -n no-preload-803600: exit status 2 (575.61ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-803600 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-803600 logs -n 25: (1.9018104s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────┬────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                       │    PROFILE     │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────┼────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kubenet-416400 sudo iptables -t nat -L -n -v                                 │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo systemctl status kubelet --all --full --no-pager         │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo systemctl cat kubelet --no-pager                         │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo journalctl -xeu kubelet --all --full --no-pager          │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo cat /etc/kubernetes/kubelet.conf                         │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo cat /var/lib/kubelet/config.yaml                         │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo systemctl status docker --all --full --no-pager          │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo systemctl cat docker --no-pager                          │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo cat /etc/docker/daemon.json                              │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo docker system info                                       │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo systemctl status cri-docker --all --full --no-pager      │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo systemctl cat cri-docker --no-pager                      │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo cat /usr/lib/systemd/system/cri-docker.service           │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo cri-dockerd --version                                    │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo systemctl status containerd --all --full --no-pager      │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo systemctl cat containerd --no-pager                      │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo cat /lib/systemd/system/containerd.service               │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo cat /etc/containerd/config.toml                          │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo containerd config dump                                   │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo systemctl status crio --all --full --no-pager            │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │                     │
	│ ssh     │ -p kubenet-416400 sudo systemctl cat crio --no-pager                            │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ ssh     │ -p kubenet-416400 sudo crio config                                              │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	│ delete  │ -p kubenet-416400                                                               │ kubenet-416400 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │ 13 Dec 25 10:29 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────┴────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:27:08
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:27:08.467331    8476 out.go:360] Setting OutFile to fd 1212 ...
	I1213 10:27:08.510327    8476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:27:08.510327    8476 out.go:374] Setting ErrFile to fd 1652...
	I1213 10:27:08.510327    8476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:27:08.525338    8476 out.go:368] Setting JSON to false
	I1213 10:27:08.528326    8476 start.go:133] hostinfo: {"hostname":"minikube4","uptime":7435,"bootTime":1765614192,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 10:27:08.529330    8476 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 10:27:08.533334    8476 out.go:179] * [kubenet-416400] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 10:27:08.536332    8476 notify.go:221] Checking for updates...
	I1213 10:27:08.538327    8476 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:27:08.541325    8476 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:27:08.543338    8476 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 10:27:08.545327    8476 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:27:08.547331    8476 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:27:08.550333    8476 config.go:182] Loaded profile config "bridge-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 10:27:08.551337    8476 config.go:182] Loaded profile config "newest-cni-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:27:08.551337    8476 config.go:182] Loaded profile config "no-preload-803600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 10:27:08.551337    8476 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:27:08.665330    8476 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 10:27:08.669336    8476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:27:08.911222    8476 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:27:08.888781942 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:27:08.914226    8476 out.go:179] * Using the docker driver based on user configuration
	I1213 10:27:08.917218    8476 start.go:309] selected driver: docker
	I1213 10:27:08.917218    8476 start.go:927] validating driver "docker" against <nil>
	I1213 10:27:08.917218    8476 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:27:09.005866    8476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:27:09.274907    8476 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:27:09.25177994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:27:09.275859    8476 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:27:09.275859    8476 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:27:09.278852    8476 out.go:179] * Using Docker Desktop driver with root privileges
	I1213 10:27:09.281854    8476 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1213 10:27:09.281854    8476 start.go:353] cluster config:
	{Name:kubenet-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
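Editor's note: the struct dumped above is what minikube then persists to the profile's config.json (see the "Saving config" line below). A minimal sketch, assuming a hypothetical subset struct whose field names are taken from the dump (the real type has many more fields), of how that profile file could be decoded:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Hypothetical subset of the profile config shown in the log above;
// field names mirror the dump (Name, Driver, Memory, CPUs, KubernetesConfig).
type KubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	NetworkPlugin     string
}

type ClusterConfig struct {
	Name             string
	Driver           string
	Memory           int
	CPUs             int
	KubernetesConfig KubernetesConfig
}

func main() {
	// Path layout as in the log: .minikube\profiles\<name>\config.json.
	data, err := os.ReadFile(`C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\config.json`)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var cfg ClusterConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%s: driver=%s mem=%dMB k8s=%s plugin=%s\n",
		cfg.Name, cfg.Driver, cfg.Memory,
		cfg.KubernetesConfig.KubernetesVersion, cfg.KubernetesConfig.NetworkPlugin)
}
```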
	I1213 10:27:09.284873    8476 out.go:179] * Starting "kubenet-416400" primary control-plane node in "kubenet-416400" cluster
	I1213 10:27:09.288885    8476 cache.go:134] Beginning downloading kic base image for docker with docker
	I1213 10:27:09.290853    8476 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:27:09.296882    8476 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:27:09.296882    8476 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:27:09.296882    8476 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1213 10:27:09.296882    8476 cache.go:65] Caching tarball of preloaded images
	I1213 10:27:09.297854    8476 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1213 10:27:09.297854    8476 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
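Editor's note: the three preload lines above reduce to a stat-and-skip check: if the lz4 tarball is already in the local cache, the download is skipped entirely. A minimal sketch of that decision, with the cache path hard-coded from the log purely for illustration:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Cache path as printed in the log above (hard-coded here only for illustration).
	tarball := `C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4`
	if _, err := os.Stat(tarball); err == nil {
		fmt.Println("found local preload, skipping download")
	} else if os.IsNotExist(err) {
		fmt.Println("no local preload, would download")
	} else {
		fmt.Fprintln(os.Stderr, "stat failed:", err)
	}
}
```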
	I1213 10:27:09.297854    8476 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\config.json ...
	I1213 10:27:09.297854    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\config.json: {Name:mk0f8afb036d1878ac71666ce4d58fd434d1389e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:09.364866    8476 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:27:09.364866    8476 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:27:09.364866    8476 cache.go:243] Successfully downloaded all kic artifacts
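Editor's note: the "Found ... in local docker daemon, skipping pull" pair above corresponds to an image lookup against the daemon before any pull is attempted. A sketch of the same check via the docker CLI (`docker image inspect` exits non-zero when the image is absent); the digest suffix from the log is omitted for brevity:

```go
package main

import (
	"fmt"
	"os/exec"
)

// imageInDaemon reports whether the image is already present locally:
// `docker image inspect` exits 0 only when the reference resolves.
func imageInDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083"
	if imageInDaemon(ref) {
		fmt.Println("exists in daemon, skipping pull")
	} else {
		fmt.Println("not present, would pull")
	}
}
```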
	I1213 10:27:09.364866    8476 start.go:360] acquireMachinesLock for kubenet-416400: {Name:mk28dcadbda914f3b76421bc1eef202d654b5e0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:27:09.365883    8476 start.go:364] duration metric: took 0s to acquireMachinesLock for "kubenet-416400"
	I1213 10:27:09.365883    8476 start.go:93] Provisioning new machine with config: &{Name:kubenet-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 10:27:09.365883    8476 start.go:125] createHost starting for "" (driver="docker")
	I1213 10:27:06.633379    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:06.659612    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:06.687667    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.687737    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:06.691602    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:06.721405    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.721405    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:06.725270    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:06.757478    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.757478    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:06.761297    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:06.801212    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.801212    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:06.805113    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:06.849918    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.849918    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:06.853787    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:06.888435    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.888435    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:06.895174    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:06.930085    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.930085    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:06.933086    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:06.964089    5404 logs.go:282] 0 containers: []
	W1213 10:27:06.964089    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:06.964089    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:06.964089    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:07.052109    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:07.052109    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:07.092822    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:07.092822    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:07.184921    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:07.172596   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.173907   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.175435   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.176746   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.177730   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:07.172596   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.173907   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.175435   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.176746   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:07.177730   15537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
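Editor's note: every pass of this log-gathering loop fails the same way; nothing is listening on localhost:8443 yet, so kubectl's discovery requests are refused before TLS even starts. A hedged sketch of how to tell "connection refused" (no apiserver process bound to the port) apart from TLS or auth failures (apiserver up, credentials or certs wrong):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// A bare TCP dial is enough to distinguish the cases: "connection refused"
	// means nothing is bound to the port at all, matching the log above.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8443; failures past this point are TLS/auth/HTTP")
}
```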
	I1213 10:27:07.184921    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:07.184921    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:07.212614    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:07.212614    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
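Editor's note: the container-status command above leans on a shell fallback: the backtick substitution `which crictl || echo crictl` yields the literal word crictl when the binary is missing (so the first branch still fails cleanly), and `|| sudo docker ps -a` then takes over. The same two-step fallback expressed directly, as a sketch:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prefer crictl if it works; otherwise fall back to docker, mirroring the
	// `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a` one-liner.
	out, err := exec.Command("crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		out, err = exec.Command("docker", "ps", "-a").CombinedOutput()
		if err != nil {
			fmt.Println("both runtimes unavailable:", err)
			return
		}
	}
	fmt.Print(string(out))
}
```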
	I1213 10:27:09.772840    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:09.803912    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:09.843377    5404 logs.go:282] 0 containers: []
	W1213 10:27:09.843377    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:09.846881    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:09.876528    5404 logs.go:282] 0 containers: []
	W1213 10:27:09.876528    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:09.879529    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:09.910044    5404 logs.go:282] 0 containers: []
	W1213 10:27:09.910044    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:09.916549    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:09.959417    5404 logs.go:282] 0 containers: []
	W1213 10:27:09.959417    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:09.964602    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:09.999344    5404 logs.go:282] 0 containers: []
	W1213 10:27:09.999344    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:10.002336    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:10.032356    5404 logs.go:282] 0 containers: []
	W1213 10:27:10.032356    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:10.036336    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:10.070437    5404 logs.go:282] 0 containers: []
	W1213 10:27:10.070489    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:10.074554    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:10.112271    5404 logs.go:282] 0 containers: []
	W1213 10:27:10.112330    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:10.112330    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:10.112330    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:10.147886    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:10.147886    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:10.243310    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:10.232461   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.233610   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.235121   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.236121   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.237697   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:10.232461   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.233610   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.235121   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.236121   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:10.237697   15699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:10.243405    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:10.243405    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:10.272729    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:10.272729    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:10.326215    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:10.326215    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:09.368853    8476 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 10:27:09.369855    8476 start.go:159] libmachine.API.Create for "kubenet-416400" (driver="docker")
	I1213 10:27:09.369855    8476 client.go:173] LocalClient.Create starting
	I1213 10:27:09.369855    8476 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1213 10:27:09.369855    8476 main.go:143] libmachine: Decoding PEM data...
	I1213 10:27:09.369855    8476 main.go:143] libmachine: Parsing certificate...
	I1213 10:27:09.369855    8476 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1213 10:27:09.369855    8476 main.go:143] libmachine: Decoding PEM data...
	I1213 10:27:09.369855    8476 main.go:143] libmachine: Parsing certificate...
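Editor's note: the Reading/Decoding/Parsing triplets above are the standard PEM-to-x509 pipeline. A self-contained sketch of those three steps with the Go standard library, using the ca.pem path from the log purely for illustration:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Step 1: read the certificate file (path as logged above).
	data, err := os.ReadFile(`C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem`)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Step 2: decode the PEM envelope.
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		fmt.Fprintln(os.Stderr, "no CERTIFICATE block found")
		os.Exit(1)
	}
	// Step 3: parse the DER payload into an x509 certificate.
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("subject:", cert.Subject, "expires:", cert.NotAfter)
}
```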
	I1213 10:27:09.375556    8476 cli_runner.go:164] Run: docker network inspect kubenet-416400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 10:27:09.428532    8476 cli_runner.go:211] docker network inspect kubenet-416400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 10:27:09.431540    8476 network_create.go:284] running [docker network inspect kubenet-416400] to gather additional debugging logs...
	I1213 10:27:09.431540    8476 cli_runner.go:164] Run: docker network inspect kubenet-416400
	W1213 10:27:09.477538    8476 cli_runner.go:211] docker network inspect kubenet-416400 returned with exit code 1
	I1213 10:27:09.477538    8476 network_create.go:287] error running [docker network inspect kubenet-416400]: docker network inspect kubenet-416400: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubenet-416400 not found
	I1213 10:27:09.477538    8476 network_create.go:289] output of [docker network inspect kubenet-416400]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubenet-416400 not found
	
	** /stderr **
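Editor's note: the inspect-then-debug dance above treats exit code 1 plus a "network ... not found" daemon message as "the network does not exist yet", which is the expected state before creation rather than an error. A sketch of that check:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// networkExists runs `docker network inspect` and reads a "not found"
// failure as absence rather than as an error, as the log above does.
func networkExists(name string) (bool, error) {
	out, err := exec.Command("docker", "network", "inspect", name).CombinedOutput()
	if err == nil {
		return true, nil
	}
	if strings.Contains(string(out), "not found") {
		return false, nil
	}
	return false, fmt.Errorf("inspect %s: %v: %s", name, err, out)
}

func main() {
	ok, err := networkExists("kubenet-416400")
	fmt.Println(ok, err)
}
```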
	I1213 10:27:09.481534    8476 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:27:09.553692    8476 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:27:09.568537    8476 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:27:09.580557    8476 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e4c0f0}
	I1213 10:27:09.581551    8476 network_create.go:124] attempt to create docker network kubenet-416400 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1213 10:27:09.584547    8476 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400
	W1213 10:27:09.637542    8476 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400 returned with exit code 1
	W1213 10:27:09.637542    8476 network_create.go:149] failed to create docker network kubenet-416400 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1213 10:27:09.637542    8476 network_create.go:116] failed to create docker network kubenet-416400 192.168.67.0/24, will retry: subnet is taken
	I1213 10:27:09.664108    8476 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:27:09.678099    8476 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001885710}
	I1213 10:27:09.678099    8476 network_create.go:124] attempt to create docker network kubenet-416400 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 10:27:09.682098    8476 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400
	W1213 10:27:09.738074    8476 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400 returned with exit code 1
	W1213 10:27:09.738074    8476 network_create.go:149] failed to create docker network kubenet-416400 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1213 10:27:09.738074    8476 network_create.go:116] failed to create docker network kubenet-416400 192.168.76.0/24, will retry: subnet is taken
	I1213 10:27:09.757990    8476 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 10:27:09.771930    8476 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001910480}
	I1213 10:27:09.772001    8476 network_create.go:124] attempt to create docker network kubenet-416400 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1213 10:27:09.775120    8476 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-416400 kubenet-416400
	I1213 10:27:09.917706    8476 network_create.go:108] docker network kubenet-416400 192.168.85.0/24 created
	I1213 10:27:09.917706    8476 kic.go:121] calculated static IP "192.168.85.2" for the "kubenet-416400" container
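Editor's note: the sequence from 192.168.49.0/24 through 192.168.85.0/24 above is a retry loop: candidate /24s step the third octet by 9, subnets already reserved locally are skipped without calling the daemon, and a "Pool overlaps" error from `docker network create` marks the subnet taken and advances to the next candidate; the container's static IP is then gateway + 1 (here 192.168.85.2). A simplified sketch of that loop (the real command also attaches labels, ip-masq/icc options, and an MTU, omitted here):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Candidate /24s step the third octet by 9 (49, 58, 67, ...), matching
	// the progression visible in the log above.
	for third := 49; third <= 247; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		gateway := fmt.Sprintf("192.168.%d.1", third)
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
			"kubenet-416400").CombinedOutput()
		if err != nil && strings.Contains(string(out), "Pool overlaps") {
			fmt.Println("subnet is taken, retrying:", subnet)
			continue
		}
		if err != nil {
			fmt.Println("create failed:", err, string(out))
			return
		}
		// First usable host (gateway + 1) becomes the container's static IP.
		fmt.Printf("created kubenet-416400 on %s, static IP 192.168.%d.2\n", subnet, third)
		return
	}
}
```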
	I1213 10:27:09.926674    8476 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 10:27:09.990344    8476 cli_runner.go:164] Run: docker volume create kubenet-416400 --label name.minikube.sigs.k8s.io=kubenet-416400 --label created_by.minikube.sigs.k8s.io=true
	I1213 10:27:10.043336    8476 oci.go:103] Successfully created a docker volume kubenet-416400
	I1213 10:27:10.046336    8476 cli_runner.go:164] Run: docker run --rm --name kubenet-416400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-416400 --entrypoint /usr/bin/test -v kubenet-416400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 10:27:11.508914    8476 cli_runner.go:217] Completed: docker run --rm --name kubenet-416400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-416400 --entrypoint /usr/bin/test -v kubenet-416400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.4625571s)
	I1213 10:27:11.508914    8476 oci.go:107] Successfully prepared a docker volume kubenet-416400
	I1213 10:27:11.508914    8476 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:27:11.508914    8476 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 10:27:11.513316    8476 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-416400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
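Editor's note: the extraction step above mounts the host-side lz4 tarball read-only into a throwaway container whose entrypoint is tar, writing straight into the named volume; nothing but the mounts touches the host. The same pattern sketched with os/exec (image digest omitted for brevity; the Completed line further down shows this run taking roughly 15s):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the `docker run --rm --entrypoint /usr/bin/tar ...` line above:
	// host tarball mounted read-only, named volume as the extraction target.
	tarball := `C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4`
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083"
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", "kubenet-416400:/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Println("extract failed:", err, string(out))
		return
	}
	fmt.Println("preload extracted into volume kubenet-416400")
}
```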
	I1213 10:27:12.902491    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:12.927076    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:12.960518    5404 logs.go:282] 0 containers: []
	W1213 10:27:12.960518    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:12.964255    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:12.994335    5404 logs.go:282] 0 containers: []
	W1213 10:27:12.994335    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:12.998437    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:13.029262    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.029262    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:13.032271    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:13.063264    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.063264    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:13.066261    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:13.100216    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.100278    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:13.103950    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:13.137029    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.137029    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:13.140883    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:13.174413    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.174413    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:13.178202    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:13.207016    5404 logs.go:282] 0 containers: []
	W1213 10:27:13.207016    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:13.207016    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:13.207016    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:13.259542    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:13.259542    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:13.332062    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:13.332062    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:13.371879    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:13.371879    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:13.456462    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:13.445517   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.446626   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.447825   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.448792   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.450006   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:13.445517   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.446626   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.447825   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.448792   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:13.450006   15892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:13.456462    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:13.456462    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:15.989415    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:16.012448    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:16.052242    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.052312    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:16.055633    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:16.090683    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.090683    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:16.093931    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:16.133949    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.133949    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:16.138532    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:16.171831    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.171831    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:16.175955    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:16.216817    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.216864    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:16.221712    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:16.258393    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.258393    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:16.261397    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:16.294407    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.294407    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:16.297391    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:16.333410    5404 logs.go:282] 0 containers: []
	W1213 10:27:16.333410    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:16.333410    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:16.333410    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:16.410413    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:16.410413    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:16.450393    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:16.450393    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:16.546373    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:16.533035   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.534931   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.537458   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.540395   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.542178   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:16.533035   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.534931   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.537458   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.540395   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:16.542178   16043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:16.546373    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:16.546373    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:16.575806    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:16.575806    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:19.148785    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:19.175720    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:19.209231    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.209231    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:19.217486    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:19.260811    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.260866    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:19.267265    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:19.314924    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.314924    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:19.320918    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:19.357550    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.357550    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:19.361556    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:19.392800    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.392800    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:19.397769    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:19.441959    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.441959    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:19.444967    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:19.479965    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.479965    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:19.484482    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:19.525249    5404 logs.go:282] 0 containers: []
	W1213 10:27:19.525314    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:19.525357    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:19.525357    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:19.570778    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:19.570778    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:19.680558    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:19.668248   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.670354   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.672621   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.673972   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.675837   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:19.668248   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.670354   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.672621   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.673972   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:19.675837   16214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:19.680656    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:19.680693    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:19.714060    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:19.714103    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:19.764555    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:19.764555    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:22.334977    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:22.359551    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:22.400355    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.400355    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:22.404363    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:22.438349    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.438349    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:22.442349    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:22.473511    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.473511    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:22.478566    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:22.512393    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.512393    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:22.516409    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:22.550405    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.550405    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:22.553404    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:22.584398    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.584398    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:22.588395    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:22.615398    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.615398    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:22.618396    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:22.649404    5404 logs.go:282] 0 containers: []
	W1213 10:27:22.649404    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:22.649404    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:22.649404    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:22.710398    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:22.710398    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:22.751988    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:22.751988    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:22.843768    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:22.835619   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.836770   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.837683   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.838841   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.839832   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:22.835619   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.836770   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.837683   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.838841   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:22.839832   16380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:22.843768    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:22.843768    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:22.871626    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:22.871626    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:25.434319    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:25.459020    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:25.500957    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.500957    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:25.505654    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:25.533996    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.534053    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:25.538297    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:25.569653    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.569653    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:25.573591    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:25.606004    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.606004    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:25.612212    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:25.641756    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.641835    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:25.645703    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:25.677304    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.677342    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:25.680988    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:25.712812    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.712812    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:25.716992    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:25.748063    5404 logs.go:282] 0 containers: []
	W1213 10:27:25.748063    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:25.748063    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:25.748063    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:25.800759    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:25.800759    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:25.873214    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:25.873214    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:25.914015    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:25.914015    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:26.003163    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:25.989841   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.991273   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.992553   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.995529   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.997804   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:25.989841   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.991273   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.992553   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.995529   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:25.997804   16560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:26.003163    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:26.003163    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
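	The cycle above is minikube's log-gathering probe: for each expected control-plane component it lists `docker ps -a` filtered on the `k8s_<component>` container-name prefix, and every probe comes back empty, consistent with the connection-refused errors around it. A rough shell equivalent of one pass, for reproducing the probe by hand (the component list is read off the log above, not taken from minikube's source):

	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	        ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
	        # Empty output corresponds to the "0 containers" / "No container was found" lines above.
	        echo "${c}: ${ids:-<none>}"
	    done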
	I1213 10:27:26.833120    8476 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-416400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (15.3195505s)
	I1213 10:27:26.833120    8476 kic.go:203] duration metric: took 15.3239811s to extract preloaded images to volume ...
	I1213 10:27:26.839444    8476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:27:27.097722    8476 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-13 10:27:27.079878659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 10:27:27.101719    8476 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 10:27:27.338932    8476 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-416400 --name kubenet-416400 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-416400 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-416400 --network kubenet-416400 --ip 192.168.85.2 --volume kubenet-416400:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 10:27:28.058796    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Running}}
	I1213 10:27:28.125687    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Status}}
	I1213 10:27:28.182686    8476 cli_runner.go:164] Run: docker exec kubenet-416400 stat /var/lib/dpkg/alternatives/iptables
	I1213 10:27:28.308932    8476 oci.go:144] the created container "kubenet-416400" has a running status.
	I1213 10:27:28.308932    8476 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa...
	I1213 10:27:28.438434    8476 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
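	Here the parallel profile (PID 8476, creating "kubenet-416400") provisions SSH access to the freshly created kic container: it generates an RSA keypair on the Windows host and installs the public half as /home/docker/.ssh/authorized_keys inside the container. A hand-run approximation of the same steps (a sketch only; minikube does this through its kic_runner, and the paths are taken from the log):

	    ssh-keygen -t rsa -N '' -f ./id_rsa
	    docker exec kubenet-416400 mkdir -p /home/docker/.ssh
	    docker cp ./id_rsa.pub kubenet-416400:/home/docker/.ssh/authorized_keys
	    # Matches the chown step logged a few lines below.
	    docker exec --privileged kubenet-416400 chown -R docker:docker /home/docker/.ssh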
	I1213 10:27:28.537436    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:28.561363    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:28.619392    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.619392    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:28.623396    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:28.669400    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.669400    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:28.676410    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:28.717401    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.717401    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:28.721393    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:28.757400    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.757400    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:28.760393    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:28.800402    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.800402    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:28.803398    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:28.841400    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.841400    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:28.844399    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:28.878399    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.878399    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:28.882403    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:28.916403    5404 logs.go:282] 0 containers: []
	W1213 10:27:28.916403    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:28.916403    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:28.916403    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:28.992400    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:28.992400    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:29.040404    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:29.040404    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:29.149363    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:29.137915   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.139172   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.141264   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.142415   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.144176   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:29.137915   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.139172   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.141264   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.142415   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:29.144176   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:29.149363    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:29.149363    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:29.183066    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:29.183066    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:28.513430    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Status}}
	I1213 10:27:28.575704    8476 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 10:27:28.575704    8476 kic_runner.go:114] Args: [docker exec --privileged kubenet-416400 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 10:27:28.715410    8476 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa...
	I1213 10:27:31.090843    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Status}}
	I1213 10:27:31.148980    8476 machine.go:94] provisionDockerMachine start ...
	I1213 10:27:31.152618    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:31.213696    8476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:27:31.227691    8476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 55079 <nil> <nil>}
	I1213 10:27:31.227691    8476 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:27:31.426494    8476 main.go:143] libmachine: SSH cmd err, output: <nil>: kubenet-416400
	
	I1213 10:27:31.426494    8476 ubuntu.go:182] provisioning hostname "kubenet-416400"
	I1213 10:27:31.430633    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:31.483323    8476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:27:31.484332    8476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 55079 <nil> <nil>}
	I1213 10:27:31.484332    8476 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubenet-416400 && echo "kubenet-416400" | sudo tee /etc/hostname
	I1213 10:27:31.695552    8476 main.go:143] libmachine: SSH cmd err, output: <nil>: kubenet-416400
	
	I1213 10:27:31.701394    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:31.759724    8476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:27:31.759724    8476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 55079 <nil> <nil>}
	I1213 10:27:31.759724    8476 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubenet-416400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-416400/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubenet-416400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:27:31.957771    8476 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:27:31.957771    8476 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1213 10:27:31.957771    8476 ubuntu.go:190] setting up certificates
	I1213 10:27:31.957771    8476 provision.go:84] configureAuth start
	I1213 10:27:31.961622    8476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-416400
	I1213 10:27:32.029795    8476 provision.go:143] copyHostCerts
	I1213 10:27:32.030302    8476 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1213 10:27:32.030343    8476 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1213 10:27:32.030585    8476 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1213 10:27:32.031834    8476 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1213 10:27:32.031890    8476 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1213 10:27:32.032201    8476 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1213 10:27:32.033307    8476 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1213 10:27:32.033341    8476 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1213 10:27:32.033717    8476 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1213 10:27:32.034519    8476 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubenet-416400 san=[127.0.0.1 192.168.85.2 kubenet-416400 localhost minikube]
	I1213 10:27:32.150424    8476 provision.go:177] copyRemoteCerts
	I1213 10:27:32.155416    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:27:32.160422    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:32.214413    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	I1213 10:27:32.367375    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:27:32.404881    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I1213 10:27:32.437627    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:27:32.464627    8476 provision.go:87] duration metric: took 506.8482ms to configureAuth
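	configureAuth above mints a server certificate from the local minikube CA with the SANs listed at the "generating server cert" line (127.0.0.1, 192.168.85.2, kubenet-416400, localhost, minikube), then copies ca.pem, server.pem, and server-key.pem into /etc/docker for dockerd's TLS listener. minikube generates these in Go; a rough openssl equivalent, with filenames and validity period chosen purely for illustration:

	    openssl req -new -newkey rsa:2048 -nodes \
	        -keyout server-key.pem -out server.csr -subj "/O=jenkins.kubenet-416400"
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	        -out server.pem -days 365 \
	        -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:kubenet-416400,DNS:localhost,DNS:minikube')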
	I1213 10:27:32.464627    8476 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:27:32.465634    8476 config.go:182] Loaded profile config "kubenet-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 10:27:32.469262    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:32.530015    8476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:27:32.530111    8476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 55079 <nil> <nil>}
	I1213 10:27:32.530111    8476 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1213 10:27:32.727229    8476 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1213 10:27:32.727229    8476 ubuntu.go:71] root file system type: overlay
	I1213 10:27:32.727229    8476 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1213 10:27:32.730229    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:32.781835    8476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:27:32.782115    8476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 55079 <nil> <nil>}
	I1213 10:27:32.782115    8476 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1213 10:27:32.980566    8476 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1213 10:27:32.985113    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:33.047448    8476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:27:33.048094    8476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61066fd00] 0x7ff610672860 <nil>  [] 0s} 127.0.0.1 55079 <nil> <nil>}
	I1213 10:27:33.048138    8476 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
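	The command just above is an idempotent unit update: the freshly rendered docker.service.new only replaces the live unit (followed by daemon-reload, enable, restart) when `diff` reports a difference. Note also the empty `ExecStart=` inside the rendered unit, which clears the inherited command before setting the new one, exactly as the unit's own comment explains. The same pattern spelled out, as a sketch of the one-liner above rather than additional minikube behavior:

	    new=/lib/systemd/system/docker.service.new
	    cur=/lib/systemd/system/docker.service
	    if ! sudo diff -u "$cur" "$new"; then
	        # Only rewrite and bounce the daemon when the rendered unit actually changed.
	        sudo mv "$new" "$cur"
	        sudo systemctl daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	    fi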
	I1213 10:27:31.746729    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:31.766711    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:31.799712    5404 logs.go:282] 0 containers: []
	W1213 10:27:31.799712    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:31.802714    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:31.848351    5404 logs.go:282] 0 containers: []
	W1213 10:27:31.848351    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:31.852710    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:31.893847    5404 logs.go:282] 0 containers: []
	W1213 10:27:31.894377    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:31.897862    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:31.937061    5404 logs.go:282] 0 containers: []
	W1213 10:27:31.937061    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:31.942850    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:31.992025    5404 logs.go:282] 0 containers: []
	W1213 10:27:31.992025    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:31.996453    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:32.043414    5404 logs.go:282] 0 containers: []
	W1213 10:27:32.043414    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:32.047410    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:32.082416    5404 logs.go:282] 0 containers: []
	W1213 10:27:32.082416    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:32.086413    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:32.117413    5404 logs.go:282] 0 containers: []
	W1213 10:27:32.117413    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:32.117413    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:32.117413    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:32.184436    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:32.184436    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:32.248252    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:32.248252    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:32.288323    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:32.288323    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:32.395681    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:32.380582   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.381602   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.383843   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.385774   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.388153   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:32.380582   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.381602   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.383843   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.385774   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:32.388153   16906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:32.395681    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:32.395681    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:34.939082    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:34.963857    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:35.002856    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.002856    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:35.005854    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:35.038851    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.038851    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:35.041857    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:35.073853    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.073853    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:35.077869    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:35.110852    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.110852    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:35.113850    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:35.152093    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.152093    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:35.156094    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:35.188087    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.188087    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:35.192090    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:35.222187    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.222187    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:35.226185    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:35.257190    5404 logs.go:282] 0 containers: []
	W1213 10:27:35.257190    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:35.257190    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:35.257190    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:35.374442    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:35.357763   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.358774   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.360108   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.362218   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.363767   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:35.357763   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.358774   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.360108   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.362218   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:35.363767   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:35.374442    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:35.374442    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:35.414747    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:35.414747    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:35.470732    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:35.470732    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:35.530744    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:35.530744    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:34.752548    8476 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-13 10:27:32.964414860 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1213 10:27:34.752590    8476 machine.go:97] duration metric: took 3.6035571s to provisionDockerMachine
	I1213 10:27:34.752590    8476 client.go:176] duration metric: took 25.382363s to LocalClient.Create
	I1213 10:27:34.752660    8476 start.go:167] duration metric: took 25.3823991s to libmachine.API.Create "kubenet-416400"
	I1213 10:27:34.752660    8476 start.go:293] postStartSetup for "kubenet-416400" (driver="docker")
	I1213 10:27:34.752689    8476 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:27:34.757321    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:27:34.760792    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:34.815346    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	I1213 10:27:34.967363    8476 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:27:34.976448    8476 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:27:34.976489    8476 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:27:34.976523    8476 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1213 10:27:34.976670    8476 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1213 10:27:34.977231    8476 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> 29682.pem in /etc/ssl/certs
	I1213 10:27:34.981302    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 10:27:34.993858    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /etc/ssl/certs/29682.pem (1708 bytes)
	I1213 10:27:35.021854    8476 start.go:296] duration metric: took 269.1608ms for postStartSetup
	I1213 10:27:35.027861    8476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-416400
	I1213 10:27:35.080870    8476 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\config.json ...
	I1213 10:27:35.089862    8476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:27:35.093865    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:35.150107    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	I1213 10:27:35.268185    8476 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:27:35.276190    8476 start.go:128] duration metric: took 25.9099265s to createHost
	I1213 10:27:35.276190    8476 start.go:83] releasing machines lock for "kubenet-416400", held for 25.9099265s
	I1213 10:27:35.279209    8476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-416400
	I1213 10:27:35.343302    8476 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1213 10:27:35.346842    8476 ssh_runner.go:195] Run: cat /version.json
	I1213 10:27:35.350867    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:35.352295    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:35.411739    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	I1213 10:27:35.414747    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	W1213 10:27:35.548301    8476 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
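	Status 127 here is a binary-name mismatch rather than a network failure as such: the probe runs `curl.exe` (the Windows spelling) through ssh_runner inside the Linux guest, where the binary is plain `curl`, so bash cannot find it, and that failed probe appears to be what surfaces as the "Failing to connect to https://registry.k8s.io/" warning a few lines below. The check the guest can actually run, by hand:

	    # Probe registry reachability from inside the guest with the Linux binary name:
	    docker exec kubenet-416400 curl -sS -m 2 https://registry.k8s.io/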
	I1213 10:27:35.553481    8476 ssh_runner.go:195] Run: systemctl --version
	I1213 10:27:35.573784    8476 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:27:35.585474    8476 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:27:35.589468    8476 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:27:35.633416    8476 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 10:27:35.633416    8476 start.go:496] detecting cgroup driver to use...
	I1213 10:27:35.633416    8476 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:27:35.633416    8476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1213 10:27:35.649009    8476 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1213 10:27:35.649009    8476 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1213 10:27:35.671618    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:27:35.696739    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:27:35.711492    8476 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:27:35.715488    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:27:35.732484    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:27:35.752096    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:27:35.772619    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:27:35.796702    8476 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:27:35.815300    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:27:35.839600    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:27:35.861332    8476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 10:27:35.884116    8476 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:27:35.903094    8476 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:27:35.919226    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:27:36.090670    8476 ssh_runner.go:195] Run: sudo systemctl restart containerd
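	The sed edits above rewrite /etc/containerd/config.toml in place: pin the sandbox (pause) image, disable restrict_oom_score_adj, force `SystemdCgroup = false` to match the "cgroupfs" driver detected on the host, migrate v1 runtime names to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, and re-insert enable_unprivileged_ports = true before the daemon-reload and restart. A quick way to spot-check the result (a hand-run verification, not part of minikube's flow):

	    docker exec kubenet-416400 grep -E 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	    # Expected after the edits above:
	    #   SystemdCgroup = false
	    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
	    #   conf_dir = "/etc/cni/net.d"
	    #   enable_unprivileged_ports = true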
	I1213 10:27:36.249395    8476 start.go:496] detecting cgroup driver to use...
	I1213 10:27:36.249395    8476 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:27:36.253347    8476 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1213 10:27:36.275349    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:27:36.297606    8476 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:27:36.328195    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:27:36.353573    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:27:36.372805    8476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:27:36.406354    8476 ssh_runner.go:195] Run: which cri-dockerd
	I1213 10:27:36.417745    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1213 10:27:36.432809    8476 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (196 bytes)
	I1213 10:27:36.462872    8476 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1213 10:27:36.616454    8476 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1213 10:27:36.759020    8476 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1213 10:27:36.759020    8476 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1213 10:27:36.784951    8476 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1213 10:27:36.811665    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:27:36.964769    8476 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1213 10:27:37.921141    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:27:37.944144    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1213 10:27:37.967237    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 10:27:37.988498    8476 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1213 10:27:38.188916    8476 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1213 10:27:38.358397    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:27:38.521403    8476 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1213 10:27:38.546402    8476 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1213 10:27:38.569221    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:27:38.730646    8476 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1213 10:27:38.878189    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1213 10:27:38.898180    8476 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1213 10:27:38.902189    8476 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1213 10:27:38.911194    8476 start.go:564] Will wait 60s for crictl version
	I1213 10:27:38.916189    8476 ssh_runner.go:195] Run: which crictl
	I1213 10:27:38.926186    8476 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:27:38.973186    8476 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1213 10:27:38.978795    8476 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1213 10:27:39.038631    8476 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
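	The version probe above succeeds because /etc/crictl.yaml was rewritten earlier to point crictl at cri-dockerd's socket, which is why `crictl version` reports RuntimeName docker rather than containerd. An equivalent hand-run check with the endpoint passed explicitly (illustrative; the socket path is the one configured above):

	    docker exec kubenet-416400 sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version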
	I1213 10:27:38.092084    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:38.124676    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:38.161924    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.161924    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:38.164928    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:38.198945    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.198945    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:38.201915    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:38.228927    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.228927    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:38.231926    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:38.270851    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.270955    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:38.276558    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:38.313393    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.313393    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:38.316394    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:38.348406    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.348406    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:38.351414    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:38.380397    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.380397    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:38.385402    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:38.417397    5404 logs.go:282] 0 containers: []
	W1213 10:27:38.417397    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:38.417397    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:38.417397    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:38.488395    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:38.488395    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:38.526408    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:38.526408    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:38.618667    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:38.608046   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.608871   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.611071   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.612089   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.612946   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:38.608046   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.608871   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.611071   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.612089   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:38.612946   17241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:38.618667    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:38.618667    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:38.648614    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:38.649617    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
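(The "connection refused" stanzas above mean no kube-apiserver is listening on localhost:8443 inside the node yet, consistent with the empty "docker ps" results for every control-plane container. A quick manual probe, sketched here with a placeholder profile name, would confirm that:)

	# <profile> is a placeholder; substitute the affected minikube profile
	minikube -p <profile> ssh -- "sudo ss -tlnp | grep 8443 || echo 'nothing listening on 8443'"
	minikube -p <profile> ssh -- "curl -ksS https://localhost:8443/healthz || true"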
	I1213 10:27:39.102779    8476 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.2 ...
	I1213 10:27:39.107988    8476 cli_runner.go:164] Run: docker exec -t kubenet-416400 dig +short host.docker.internal
	I1213 10:27:39.257345    8476 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1213 10:27:39.260347    8476 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1213 10:27:39.268341    8476 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:27:39.287341    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:27:39.347887    8476 kubeadm.go:884] updating cluster {Name:kubenet-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:27:39.347887    8476 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 10:27:39.352726    8476 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 10:27:39.403212    8476 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 10:27:39.403212    8476 docker.go:621] Images already preloaded, skipping extraction
	I1213 10:27:39.407208    8476 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1213 10:27:39.440282    8476 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1213 10:27:39.440822    8476 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:27:39.440822    8476 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 docker true true} ...
	I1213 10:27:39.441138    8476 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubenet-416400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --pod-cidr=10.244.0.0/16
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kubenet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 10:27:39.446529    8476 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1213 10:27:39.559260    8476 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1213 10:27:39.559320    8476 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:27:39.559347    8476 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-416400 NodeName:kubenet-416400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:27:39.559347    8476 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubenet-416400"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
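	(A generated config like the one above can be exercised without touching the node's state; kubeadm supports a dry-run mode, so a sanity check along these lines is possible before the real init that appears further below:)
	
	# sketch: validate the rendered config non-destructively
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run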
	
	I1213 10:27:39.563035    8476 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 10:27:39.576055    8476 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:27:39.580043    8476 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:27:39.597066    8476 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (338 bytes)
	I1213 10:27:39.616038    8476 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 10:27:39.638041    8476 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1213 10:27:39.672042    8476 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:27:39.680043    8476 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:27:39.700046    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:27:39.887167    8476 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:27:39.917364    8476 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400 for IP: 192.168.85.2
	I1213 10:27:39.917364    8476 certs.go:195] generating shared ca certs ...
	I1213 10:27:39.917364    8476 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:39.918062    8476 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1213 10:27:39.918062    8476 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1213 10:27:39.918062    8476 certs.go:257] generating profile certs ...
	I1213 10:27:39.918912    8476 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\client.key
	I1213 10:27:39.918966    8476 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\client.crt with IP's: []
	I1213 10:27:39.969525    8476 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\client.crt ...
	I1213 10:27:39.969525    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\client.crt: {Name:mkded0c3a33573ddb9efde80db53622d23beebc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:39.970523    8476 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\client.key ...
	I1213 10:27:39.970523    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\client.key: {Name:mkddb0c680c1cfbc7fb76412dc59f990aa3351fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:39.970523    8476 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.key.da8001c6
	I1213 10:27:39.970523    8476 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.crt.da8001c6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1213 10:27:40.148355    8476 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.crt.da8001c6 ...
	I1213 10:27:40.148355    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.crt.da8001c6: {Name:mkb638048bd89c15c2729273b91ace1d4490353e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:40.148703    8476 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.key.da8001c6 ...
	I1213 10:27:40.148703    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.key.da8001c6: {Name:mk4e2e28e87911a65a5741680815685d917d2bc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:40.149871    8476 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.crt.da8001c6 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.crt
	I1213 10:27:40.164141    8476 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.key.da8001c6 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.key
	I1213 10:27:40.165495    8476 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.key
	I1213 10:27:40.165495    8476 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.crt with IP's: []
	I1213 10:27:40.389110    8476 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.crt ...
	I1213 10:27:40.389110    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.crt: {Name:mk9ea56953d9936fd5e08b8dc707cf8c179327b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:40.390173    8476 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.key ...
	I1213 10:27:40.390173    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.key: {Name:mk1d05f99191685ca712d4d7978411bd7096c85b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:27:40.404560    8476 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem (1338 bytes)
	W1213 10:27:40.404560    8476 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968_empty.pem, impossibly tiny 0 bytes
	I1213 10:27:40.404560    8476 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1213 10:27:40.404560    8476 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1213 10:27:40.404560    8476 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1213 10:27:40.405551    8476 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1213 10:27:40.405551    8476 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem (1708 bytes)
	I1213 10:27:40.406555    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:27:40.441360    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:27:40.476758    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:27:40.508936    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 10:27:40.539795    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 10:27:40.569170    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 10:27:40.700611    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:27:40.735214    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-416400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:27:40.767361    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /usr/share/ca-certificates/29682.pem (1708 bytes)
	I1213 10:27:40.807746    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:27:40.841101    8476 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\2968.pem --> /usr/share/ca-certificates/2968.pem (1338 bytes)
	I1213 10:27:40.876541    8476 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:27:40.905929    8476 ssh_runner.go:195] Run: openssl version
	I1213 10:27:40.919422    8476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29682.pem
	I1213 10:27:40.935412    8476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29682.pem /etc/ssl/certs/29682.pem
	I1213 10:27:40.958800    8476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29682.pem
	I1213 10:27:40.966774    8476 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:48 /usr/share/ca-certificates/29682.pem
	I1213 10:27:40.970772    8476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29682.pem
	I1213 10:27:41.020692    8476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:27:41.042422    8476 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/29682.pem /etc/ssl/certs/3ec20f2e.0
	I1213 10:27:41.062440    8476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:27:41.083044    8476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:27:41.101089    8476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:27:41.109913    8476 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:27:41.115807    8476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:27:41.166390    8476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:27:41.184269    8476 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 10:27:41.205563    8476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2968.pem
	I1213 10:27:41.225153    8476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2968.pem /etc/ssl/certs/2968.pem
	I1213 10:27:41.244522    8476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2968.pem
	I1213 10:27:41.255274    8476 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:48 /usr/share/ca-certificates/2968.pem
	I1213 10:27:41.258261    8476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2968.pem
	I1213 10:27:41.337148    8476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:27:41.361850    8476 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2968.pem /etc/ssl/certs/51391683.0
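(The 3ec20f2e.0, b5213941.0 and 51391683.0 symlink names created above are the OpenSSL subject-hash form of each certificate, i.e. exactly what the preceding "openssl x509 -hash -noout" calls print; the mapping can be reproduced by hand:)

	# prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created above
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem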
	I1213 10:27:41.386416    8476 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:27:41.397702    8476 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 10:27:41.398038    8476 kubeadm.go:401] StartCluster: {Name:kubenet-416400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-416400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:27:41.402376    8476 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1213 10:27:41.436826    8476 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:27:41.456770    8476 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:27:41.472386    8476 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:27:41.476747    8476 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:27:41.495422    8476 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:27:41.495422    8476 kubeadm.go:158] found existing configuration files:
	
	I1213 10:27:41.499410    8476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 10:27:41.516241    8476 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:27:41.521896    8476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:27:41.541264    8476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 10:27:41.558570    8476 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:27:41.564101    8476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:27:41.584137    8476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 10:27:41.604304    8476 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:27:41.610955    8476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:27:41.630902    8476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 10:27:41.645473    8476 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:27:41.649275    8476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:27:41.666272    8476 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:27:41.782563    8476 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1213 10:27:41.788925    8476 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1213 10:27:41.907030    8476 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:27:41.206851    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:41.233354    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:41.265257    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.265257    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:41.269906    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:41.306686    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.306741    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:41.310710    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:41.357371    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.357427    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:41.361994    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:41.408206    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.408206    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:41.412215    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:41.440724    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.440761    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:41.444506    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:41.485572    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.485572    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:41.489246    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:41.524191    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.524191    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:41.528287    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:41.561636    5404 logs.go:282] 0 containers: []
	W1213 10:27:41.561708    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:41.561708    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:41.561743    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:41.640633    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:41.640633    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:41.679302    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:41.680274    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:41.769509    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:41.756355   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.757496   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.758621   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.762100   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.763629   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:41.756355   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.757496   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.758621   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.762100   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:41.763629   17409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:41.769509    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:41.769509    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:41.799016    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:41.799067    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:44.369546    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:44.392404    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:44.422173    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.422173    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:44.426709    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:44.462171    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.462253    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:44.466284    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:44.494675    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.494675    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:44.499090    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:44.525551    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.525576    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:44.529460    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:44.557893    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.557944    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:44.561644    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:44.592507    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.592507    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:44.598127    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:44.628090    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.628112    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:44.632134    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:44.680973    5404 logs.go:282] 0 containers: []
	W1213 10:27:44.681027    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:44.681074    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:44.681074    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:44.750683    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:44.750683    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:44.791179    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:44.791179    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:44.880384    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:44.868761   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.869600   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.870808   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.872391   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.873598   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:44.868761   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.869600   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.870808   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.872391   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:44.873598   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:44.880415    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:44.880415    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:44.912168    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:44.912168    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:47.473178    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:47.501052    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:47.534467    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.534540    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:47.538128    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:47.568455    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.568455    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:47.575037    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:47.610628    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.610628    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:47.614588    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:47.650306    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.650306    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:47.655401    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:47.688313    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.688313    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:47.691318    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:47.722314    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.722859    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:47.727885    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:47.758032    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.758032    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:47.761680    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:47.793670    5404 logs.go:282] 0 containers: []
	W1213 10:27:47.793670    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:47.793670    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:47.793670    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:47.882682    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:47.871699   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.872599   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.874519   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.875664   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.876452   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:27:47.871699   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.872599   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.874519   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.875664   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:47.876452   17729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:27:47.882682    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:47.882682    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:47.916355    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:47.916355    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:47.969201    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:47.969201    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:48.035144    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:48.036141    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:50.578488    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:50.600943    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:50.631833    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.631833    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:50.635998    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:50.674649    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.674649    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:50.677731    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:50.712195    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.712322    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:50.716398    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:50.750764    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.750764    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:50.754125    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:50.786595    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.786595    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:50.790175    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:50.818734    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.818734    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:50.821737    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:50.854679    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.854679    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:50.859104    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:50.889584    5404 logs.go:282] 0 containers: []
	W1213 10:27:50.889584    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:50.889584    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:50.889584    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:50.947004    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:50.947004    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:50.984338    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:50.984338    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:51.071556    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:51.060341   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.061513   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.063176   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.064640   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:51.065750   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:51.071556    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:51.071556    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:51.102630    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:51.102630    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:53.655677    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:53.682918    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:53.715653    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.715653    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:53.718956    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:53.747498    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.747498    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:53.751451    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:53.781030    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.781060    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:53.785519    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:53.815077    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.815077    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:53.818373    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:53.851406    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.851432    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:53.855158    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:53.886371    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.886426    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:53.890230    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:53.921595    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.921595    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:53.925821    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:53.958793    5404 logs.go:282] 0 containers: []
	W1213 10:27:53.958867    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:53.958867    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:53.958867    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:54.023643    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:54.023643    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:54.069221    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:54.069221    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:54.158534    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:54.148053   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:54.149254   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:54.150659   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:54.151827   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:54.152932   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:54.158534    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:54.158534    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:54.187711    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:54.187711    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:57.321321    8476 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 10:27:57.321858    8476 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:27:57.322090    8476 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:27:57.322290    8476 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:27:57.322547    8476 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:27:57.322713    8476 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:27:57.327382    8476 out.go:252]   - Generating certificates and keys ...
	I1213 10:27:57.327382    8476 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:27:57.327991    8476 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:27:57.328219    8476 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 10:27:57.328219    8476 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 10:27:57.328219    8476 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 10:27:57.328219    8476 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 10:27:57.328219    8476 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 10:27:57.328961    8476 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kubenet-416400 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 10:27:57.328961    8476 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 10:27:57.328961    8476 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kubenet-416400 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 10:27:57.328961    8476 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 10:27:57.328961    8476 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 10:27:57.328961    8476 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 10:27:57.328961    8476 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:27:57.328961    8476 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:27:57.329956    8476 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:27:57.329956    8476 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:27:57.329956    8476 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:27:57.329956    8476 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:27:57.329956    8476 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:27:57.329956    8476 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:27:57.333993    8476 out.go:252]   - Booting up control plane ...
	I1213 10:27:57.333993    8476 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:27:57.333993    8476 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:27:57.333993    8476 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:27:57.333993    8476 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:27:57.333993    8476 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:27:57.334957    8476 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:27:57.334957    8476 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:27:57.334957    8476 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:27:57.334957    8476 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:27:57.334957    8476 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:27:57.334957    8476 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.499474ms
	I1213 10:27:57.334957    8476 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 10:27:57.335962    8476 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1213 10:27:57.335962    8476 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 10:27:57.335962    8476 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 10:27:57.335962    8476 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.506067897s
	I1213 10:27:57.335962    8476 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.281282907s
	I1213 10:27:57.335962    8476 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 9.504426001s
	I1213 10:27:57.335962    8476 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 10:27:57.336957    8476 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 10:27:57.336957    8476 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 10:27:57.336957    8476 kubeadm.go:319] [mark-control-plane] Marking the node kubenet-416400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 10:27:57.336957    8476 kubeadm.go:319] [bootstrap-token] Using token: fr9253.a366cb10hxgbs57g
	I1213 10:27:57.338959    8476 out.go:252]   - Configuring RBAC rules ...
	I1213 10:27:57.338959    8476 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 10:27:57.339952    8476 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 10:27:57.339952    8476 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 10:27:57.339952    8476 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 10:27:57.339952    8476 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 10:27:57.339952    8476 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 10:27:57.340953    8476 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 10:27:57.340953    8476 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 10:27:57.340953    8476 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 10:27:57.340953    8476 kubeadm.go:319] 
	I1213 10:27:57.340953    8476 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 10:27:57.340953    8476 kubeadm.go:319] 
	I1213 10:27:57.340953    8476 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 10:27:57.340953    8476 kubeadm.go:319] 
	I1213 10:27:57.340953    8476 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 10:27:57.340953    8476 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 10:27:57.340953    8476 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 10:27:57.341967    8476 kubeadm.go:319] 
	I1213 10:27:57.341967    8476 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 10:27:57.341967    8476 kubeadm.go:319] 
	I1213 10:27:57.341967    8476 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 10:27:57.341967    8476 kubeadm.go:319] 
	I1213 10:27:57.341967    8476 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 10:27:57.341967    8476 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 10:27:57.341967    8476 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 10:27:57.341967    8476 kubeadm.go:319] 
	I1213 10:27:57.341967    8476 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 10:27:57.341967    8476 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 10:27:57.341967    8476 kubeadm.go:319] 
	I1213 10:27:57.342958    8476 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token fr9253.a366cb10hxgbs57g \
	I1213 10:27:57.342958    8476 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4e186cc62273bb1ac6e3884beccb3b1254d51eaaca530d60f3ff3ceb07e5bb75 \
	I1213 10:27:57.342958    8476 kubeadm.go:319] 	--control-plane 
	I1213 10:27:57.342958    8476 kubeadm.go:319] 
	I1213 10:27:57.342958    8476 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 10:27:57.342958    8476 kubeadm.go:319] 
	I1213 10:27:57.342958    8476 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token fr9253.a366cb10hxgbs57g \
	I1213 10:27:57.342958    8476 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:4e186cc62273bb1ac6e3884beccb3b1254d51eaaca530d60f3ff3ceb07e5bb75 
	I1213 10:27:57.342958    8476 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1213 10:27:57.342958    8476 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 10:27:57.348959    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:27:57.348959    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kubenet-416400 minikube.k8s.io/updated_at=2025_12_13T10_27_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453 minikube.k8s.io/name=kubenet-416400 minikube.k8s.io/primary=true
	I1213 10:27:57.359965    8476 ops.go:34] apiserver oom_adj: -16
	I1213 10:27:57.481312    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:27:57.982343    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:27:58.481678    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:27:58.981222    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:27:59.482569    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:27:59.981670    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:28:00.482737    8476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:28:00.667261    8476 kubeadm.go:1114] duration metric: took 3.3242542s to wait for elevateKubeSystemPrivileges
	I1213 10:28:00.667261    8476 kubeadm.go:403] duration metric: took 19.2689858s to StartCluster
	I1213 10:28:00.667261    8476 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:28:00.667261    8476 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 10:28:00.668362    8476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:28:00.670249    8476 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1213 10:28:00.670405    8476 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:28:00.670495    8476 addons.go:70] Setting storage-provisioner=true in profile "kubenet-416400"
	I1213 10:28:00.670495    8476 addons.go:239] Setting addon storage-provisioner=true in "kubenet-416400"
	I1213 10:28:00.670495    8476 addons.go:70] Setting default-storageclass=true in profile "kubenet-416400"
	I1213 10:28:00.670495    8476 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubenet-416400"
	I1213 10:28:00.670495    8476 host.go:66] Checking if "kubenet-416400" exists ...
	I1213 10:28:00.670495    8476 config.go:182] Loaded profile config "kubenet-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 10:28:00.670296    8476 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 10:28:00.672621    8476 out.go:179] * Verifying Kubernetes components...
	I1213 10:28:00.680707    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Status}}
	I1213 10:28:00.681870    8476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:28:00.683512    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Status}}
	I1213 10:28:00.745823    8476 addons.go:239] Setting addon default-storageclass=true in "kubenet-416400"
	I1213 10:28:00.745823    8476 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:27:56.751844    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:56.777473    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:56.819791    5404 logs.go:282] 0 containers: []
	W1213 10:27:56.819791    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:56.823836    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:56.851634    5404 logs.go:282] 0 containers: []
	W1213 10:27:56.851634    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:56.856515    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:27:56.890733    5404 logs.go:282] 0 containers: []
	W1213 10:27:56.890733    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:27:56.896015    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:27:56.929283    5404 logs.go:282] 0 containers: []
	W1213 10:27:56.929283    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:27:56.933600    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:27:56.965281    5404 logs.go:282] 0 containers: []
	W1213 10:27:56.965380    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:27:56.971621    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:27:57.007594    5404 logs.go:282] 0 containers: []
	W1213 10:27:57.007594    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:27:57.011652    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:27:57.041984    5404 logs.go:282] 0 containers: []
	W1213 10:27:57.041984    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:27:57.047208    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:27:57.080712    5404 logs.go:282] 0 containers: []
	W1213 10:27:57.080712    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:27:57.080712    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:27:57.080712    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:27:57.149704    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:27:57.149704    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:27:57.193071    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:27:57.193071    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:27:57.285994    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:27:57.274215   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:57.274873   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:57.277962   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:57.279748   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:27:57.281147   18228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:27:57.285994    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:27:57.285994    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:27:57.321321    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:27:57.321321    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:27:59.885480    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:27:59.908525    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:27:59.938475    5404 logs.go:282] 0 containers: []
	W1213 10:27:59.938475    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:27:59.942628    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:27:59.971795    5404 logs.go:282] 0 containers: []
	W1213 10:27:59.971795    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:27:59.980520    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:00.013354    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.013413    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:00.017504    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:00.052020    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.052020    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:00.055918    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:00.092456    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.092456    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:00.099457    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:00.132599    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.132599    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:00.136451    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:00.166632    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.166765    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:00.170268    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:00.200588    5404 logs.go:282] 0 containers: []
	W1213 10:28:00.200588    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:00.200588    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:00.200588    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:00.270835    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:00.270835    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:00.309448    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:00.310446    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:00.403831    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:00.393165   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:00.394233   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:00.395506   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:00.396522   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:00.397851   18391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:28:00.403831    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:00.403831    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:00.431826    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:00.431826    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:28:00.745823    8476 host.go:66] Checking if "kubenet-416400" exists ...
	I1213 10:28:00.747823    8476 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:28:00.747823    8476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:28:00.751823    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:28:00.752838    8476 cli_runner.go:164] Run: docker container inspect kubenet-416400 --format={{.State.Status}}
	I1213 10:28:00.805827    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	I1213 10:28:00.806835    8476 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:28:00.806835    8476 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:28:00.809826    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:28:00.859695    8476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55079 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-416400\id_rsa Username:docker}
	I1213 10:28:00.877310    8476 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 10:28:01.093206    8476 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:28:01.096660    8476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:28:01.289059    8476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:28:01.688169    8476 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I1213 10:28:01.693138    8476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-416400
	I1213 10:28:01.748392    8476 node_ready.go:35] waiting up to 15m0s for node "kubenet-416400" to be "Ready" ...
	I1213 10:28:01.777235    8476 node_ready.go:49] node "kubenet-416400" is "Ready"
	I1213 10:28:01.777235    8476 node_ready.go:38] duration metric: took 28.7755ms for node "kubenet-416400" to be "Ready" ...
	I1213 10:28:01.778242    8476 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:28:01.782492    8476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:02.197568    8476 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kubenet-416400" context rescaled to 1 replicas
	I1213 10:28:02.343589    8476 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.053978s)
	I1213 10:28:02.343589    8476 api_server.go:72] duration metric: took 1.673269s to wait for apiserver process to appear ...
	I1213 10:28:02.343589    8476 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.246374s)
	I1213 10:28:02.343677    8476 api_server.go:88] waiting for apiserver healthz status ...
	I1213 10:28:02.343720    8476 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55078/healthz ...
	I1213 10:28:02.352594    8476 api_server.go:279] https://127.0.0.1:55078/healthz returned 200:
	ok
	I1213 10:28:02.355060    8476 api_server.go:141] control plane version: v1.34.2
	I1213 10:28:02.355060    8476 api_server.go:131] duration metric: took 11.3397ms to wait for apiserver health ...
	I1213 10:28:02.355060    8476 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 10:28:02.363052    8476 system_pods.go:59] 8 kube-system pods found
	I1213 10:28:02.363052    8476 system_pods.go:61] "coredns-66bc5c9577-pzlst" [c0710760-8702-46dd-82cf-57f9c82bfa9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.363052    8476 system_pods.go:61] "coredns-66bc5c9577-qsf76" [941a59a1-7977-4e35-90e1-5e787611afef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.363052    8476 system_pods.go:61] "etcd-kubenet-416400" [a3364f15-398a-4392-8334-ba7bb2989af2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 10:28:02.363052    8476 system_pods.go:61] "kube-apiserver-kubenet-416400" [64e3f0cf-d02f-4c81-854c-634c396b4c0a] Running
	I1213 10:28:02.363052    8476 system_pods.go:61] "kube-controller-manager-kubenet-416400" [7af07742-4273-4dcd-8776-4db035d3905b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:28:02.363052    8476 system_pods.go:61] "kube-proxy-7bdqb" [a4863c98-3861-4d39-b4e6-81ebe763237d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 10:28:02.363052    8476 system_pods.go:61] "kube-scheduler-kubenet-416400" [df666217-8c7b-4d6d-b9ce-9740af155c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:28:02.363052    8476 system_pods.go:61] "storage-provisioner" [1cae84ec-bfa6-4586-83a9-dfdac48e2707] Pending
	I1213 10:28:02.363052    8476 system_pods.go:74] duration metric: took 7.9926ms to wait for pod list to return data ...
	I1213 10:28:02.363052    8476 default_sa.go:34] waiting for default service account to be created ...
	I1213 10:28:02.363944    8476 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1213 10:28:02.368689    8476 default_sa.go:45] found service account: "default"
	I1213 10:28:02.368689    8476 default_sa.go:55] duration metric: took 5.6365ms for default service account to be created ...
	I1213 10:28:02.368689    8476 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 10:28:02.368892    8476 addons.go:530] duration metric: took 1.6984619s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1213 10:28:02.374322    8476 system_pods.go:86] 8 kube-system pods found
	I1213 10:28:02.374322    8476 system_pods.go:89] "coredns-66bc5c9577-pzlst" [c0710760-8702-46dd-82cf-57f9c82bfa9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.374322    8476 system_pods.go:89] "coredns-66bc5c9577-qsf76" [941a59a1-7977-4e35-90e1-5e787611afef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.374322    8476 system_pods.go:89] "etcd-kubenet-416400" [a3364f15-398a-4392-8334-ba7bb2989af2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 10:28:02.374322    8476 system_pods.go:89] "kube-apiserver-kubenet-416400" [64e3f0cf-d02f-4c81-854c-634c396b4c0a] Running
	I1213 10:28:02.374322    8476 system_pods.go:89] "kube-controller-manager-kubenet-416400" [7af07742-4273-4dcd-8776-4db035d3905b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:28:02.374322    8476 system_pods.go:89] "kube-proxy-7bdqb" [a4863c98-3861-4d39-b4e6-81ebe763237d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 10:28:02.374322    8476 system_pods.go:89] "kube-scheduler-kubenet-416400" [df666217-8c7b-4d6d-b9ce-9740af155c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:28:02.374322    8476 system_pods.go:89] "storage-provisioner" [1cae84ec-bfa6-4586-83a9-dfdac48e2707] Pending
	I1213 10:28:02.374322    8476 retry.go:31] will retry after 257.90094ms: missing components: kube-dns, kube-proxy
	I1213 10:28:02.647317    8476 system_pods.go:86] 8 kube-system pods found
	I1213 10:28:02.647382    8476 system_pods.go:89] "coredns-66bc5c9577-pzlst" [c0710760-8702-46dd-82cf-57f9c82bfa9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.647382    8476 system_pods.go:89] "coredns-66bc5c9577-qsf76" [941a59a1-7977-4e35-90e1-5e787611afef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.647382    8476 system_pods.go:89] "etcd-kubenet-416400" [a3364f15-398a-4392-8334-ba7bb2989af2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 10:28:02.647382    8476 system_pods.go:89] "kube-apiserver-kubenet-416400" [64e3f0cf-d02f-4c81-854c-634c396b4c0a] Running
	I1213 10:28:02.647448    8476 system_pods.go:89] "kube-controller-manager-kubenet-416400" [7af07742-4273-4dcd-8776-4db035d3905b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:28:02.647448    8476 system_pods.go:89] "kube-proxy-7bdqb" [a4863c98-3861-4d39-b4e6-81ebe763237d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 10:28:02.647448    8476 system_pods.go:89] "kube-scheduler-kubenet-416400" [df666217-8c7b-4d6d-b9ce-9740af155c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:28:02.647496    8476 system_pods.go:89] "storage-provisioner" [1cae84ec-bfa6-4586-83a9-dfdac48e2707] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:28:02.647496    8476 retry.go:31] will retry after 305.033982ms: missing components: kube-dns, kube-proxy
	I1213 10:28:02.960601    8476 system_pods.go:86] 8 kube-system pods found
	I1213 10:28:02.960642    8476 system_pods.go:89] "coredns-66bc5c9577-pzlst" [c0710760-8702-46dd-82cf-57f9c82bfa9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.960678    8476 system_pods.go:89] "coredns-66bc5c9577-qsf76" [941a59a1-7977-4e35-90e1-5e787611afef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:02.960678    8476 system_pods.go:89] "etcd-kubenet-416400" [a3364f15-398a-4392-8334-ba7bb2989af2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 10:28:02.960728    8476 system_pods.go:89] "kube-apiserver-kubenet-416400" [64e3f0cf-d02f-4c81-854c-634c396b4c0a] Running
	I1213 10:28:02.960728    8476 system_pods.go:89] "kube-controller-manager-kubenet-416400" [7af07742-4273-4dcd-8776-4db035d3905b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:28:02.960728    8476 system_pods.go:89] "kube-proxy-7bdqb" [a4863c98-3861-4d39-b4e6-81ebe763237d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 10:28:02.960728    8476 system_pods.go:89] "kube-scheduler-kubenet-416400" [df666217-8c7b-4d6d-b9ce-9740af155c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:28:02.960780    8476 system_pods.go:89] "storage-provisioner" [1cae84ec-bfa6-4586-83a9-dfdac48e2707] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:28:02.960803    8476 retry.go:31] will retry after 352.340429ms: missing components: kube-dns, kube-proxy
	I1213 10:28:03.376766    8476 system_pods.go:86] 8 kube-system pods found
	I1213 10:28:03.376766    8476 system_pods.go:89] "coredns-66bc5c9577-pzlst" [c0710760-8702-46dd-82cf-57f9c82bfa9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:03.376766    8476 system_pods.go:89] "coredns-66bc5c9577-qsf76" [941a59a1-7977-4e35-90e1-5e787611afef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:03.376766    8476 system_pods.go:89] "etcd-kubenet-416400" [a3364f15-398a-4392-8334-ba7bb2989af2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 10:28:03.376766    8476 system_pods.go:89] "kube-apiserver-kubenet-416400" [64e3f0cf-d02f-4c81-854c-634c396b4c0a] Running
	I1213 10:28:03.376766    8476 system_pods.go:89] "kube-controller-manager-kubenet-416400" [7af07742-4273-4dcd-8776-4db035d3905b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:28:03.376766    8476 system_pods.go:89] "kube-proxy-7bdqb" [a4863c98-3861-4d39-b4e6-81ebe763237d] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 10:28:03.376766    8476 system_pods.go:89] "kube-scheduler-kubenet-416400" [df666217-8c7b-4d6d-b9ce-9740af155c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:28:03.376766    8476 system_pods.go:89] "storage-provisioner" [1cae84ec-bfa6-4586-83a9-dfdac48e2707] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:28:03.377765    8476 retry.go:31] will retry after 379.080105ms: missing components: kube-dns, kube-proxy
	I1213 10:28:02.990203    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:03.012584    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:03.048099    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.049085    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:03.054131    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:03.090044    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.090114    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:03.094206    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:03.124610    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.124610    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:03.128713    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:03.158624    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.158624    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:03.162039    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:03.197023    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.197023    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:03.201011    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:03.231523    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.231523    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:03.238992    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:03.270780    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.270780    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:03.273777    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:03.307802    5404 logs.go:282] 0 containers: []
	W1213 10:28:03.307802    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:03.307802    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:03.307802    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:28:03.365023    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:03.365023    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:03.434753    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:03.434753    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:03.474998    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:03.474998    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:03.558479    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:03.548624   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.550169   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.550790   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.552338   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:03.553567   18581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:28:03.558479    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:03.558479    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:06.093878    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:06.119160    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:06.151920    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.151956    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:06.155686    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:03.767616    8476 system_pods.go:86] 7 kube-system pods found
	I1213 10:28:03.767736    8476 system_pods.go:89] "coredns-66bc5c9577-pzlst" [c0710760-8702-46dd-82cf-57f9c82bfa9a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:28:03.767736    8476 system_pods.go:89] "etcd-kubenet-416400" [a3364f15-398a-4392-8334-ba7bb2989af2] Running
	I1213 10:28:03.767836    8476 system_pods.go:89] "kube-apiserver-kubenet-416400" [64e3f0cf-d02f-4c81-854c-634c396b4c0a] Running
	I1213 10:28:03.767860    8476 system_pods.go:89] "kube-controller-manager-kubenet-416400" [7af07742-4273-4dcd-8776-4db035d3905b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:28:03.767860    8476 system_pods.go:89] "kube-proxy-7bdqb" [a4863c98-3861-4d39-b4e6-81ebe763237d] Running
	I1213 10:28:03.767860    8476 system_pods.go:89] "kube-scheduler-kubenet-416400" [df666217-8c7b-4d6d-b9ce-9740af155c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:28:03.767860    8476 system_pods.go:89] "storage-provisioner" [1cae84ec-bfa6-4586-83a9-dfdac48e2707] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:28:03.767920    8476 system_pods.go:126] duration metric: took 1.399211s to wait for k8s-apps to be running ...
	I1213 10:28:03.767952    8476 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 10:28:03.772800    8476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:28:03.793452    8476 system_svc.go:56] duration metric: took 25.5002ms WaitForService to wait for kubelet
	I1213 10:28:03.793452    8476 kubeadm.go:587] duration metric: took 3.1231108s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:28:03.793452    8476 node_conditions.go:102] verifying NodePressure condition ...
	I1213 10:28:03.799850    8476 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1213 10:28:03.799942    8476 node_conditions.go:123] node cpu capacity is 16
	I1213 10:28:03.799942    8476 node_conditions.go:105] duration metric: took 6.4898ms to run NodePressure ...
	I1213 10:28:03.800002    8476 start.go:242] waiting for startup goroutines ...
	I1213 10:28:03.800002    8476 start.go:247] waiting for cluster config update ...
	I1213 10:28:03.800034    8476 start.go:256] writing updated cluster config ...
	I1213 10:28:03.805062    8476 ssh_runner.go:195] Run: rm -f paused
	I1213 10:28:03.812457    8476 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 10:28:03.818438    8476 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pzlst" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 10:28:05.831273    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:08.330368    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	I1213 10:28:06.185340    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.185340    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:06.189047    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:06.218663    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.218713    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:06.223022    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:06.251817    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.251817    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:06.256048    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:06.288967    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.289042    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:06.293045    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:06.324404    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.324404    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:06.328470    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:06.359488    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.359488    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:06.363305    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:06.395085    5404 logs.go:282] 0 containers: []
	W1213 10:28:06.395085    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:06.395085    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:06.395085    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:06.460705    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:06.460705    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:06.500531    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:06.500531    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:06.584202    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:06.573119   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.576304   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.577709   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.579122   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.580090   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:28:06.573119   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.576304   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.577709   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.579122   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:06.580090   18729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:28:06.584202    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:06.584202    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:06.612936    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:06.612936    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:28:09.171143    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:09.196436    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:09.230003    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.230072    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:09.234113    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:09.263594    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.263629    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:09.267574    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:09.295583    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.295671    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:09.300744    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:09.330627    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.330627    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:09.334426    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:09.370279    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.370279    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:09.374820    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:09.404955    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.405033    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:09.410253    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:09.441568    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.441568    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:09.445297    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:09.485821    5404 logs.go:282] 0 containers: []
	W1213 10:28:09.485874    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:09.485874    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:09.485936    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:09.548603    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:09.548603    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:09.588521    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:09.588521    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:09.678327    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:09.666892   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.667836   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.670310   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.671394   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.672438   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:28:09.666892   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.667836   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.670310   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.671394   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:09.672438   18892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:28:09.678369    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:09.678369    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:09.705500    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:09.705500    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:28:10.333290    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:12.830400    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	I1213 10:28:12.262086    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:12.290635    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:12.327110    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.327110    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:12.331105    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:12.360305    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.360305    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:12.367813    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:12.398968    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.399045    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:12.403042    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:12.436089    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.436089    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:12.439942    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:12.471734    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.471734    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:12.475722    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:12.505991    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.506024    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:12.509742    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:12.539425    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.539425    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:12.543823    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:12.573279    5404 logs.go:282] 0 containers: []
	W1213 10:28:12.573344    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:12.573344    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:12.573344    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:12.636807    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:12.636807    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:12.677094    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:12.677094    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:12.762424    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:12.751891   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.752690   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.755186   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.756173   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.756852   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:28:12.751891   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.752690   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.755186   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.756173   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:12.756852   19054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:28:12.762424    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:12.762424    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:12.790164    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:12.790164    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:28:15.344891    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:15.368646    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:15.404255    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.404255    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:15.409408    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:15.441938    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.441938    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:15.445068    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:15.475697    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.475697    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:15.479253    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:15.511327    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.511327    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:15.515265    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:15.545395    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.545395    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:15.548941    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:15.579842    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.579918    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:15.584969    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:15.614571    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.614571    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:15.618436    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:15.650365    5404 logs.go:282] 0 containers: []
	W1213 10:28:15.650427    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:15.650427    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:15.650427    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:15.714351    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:15.714351    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:15.752018    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:15.752018    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:15.834772    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:15.824883   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.826055   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.826571   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.829124   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.829823   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:28:15.824883   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.826055   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.826571   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.829124   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:15.829823   19219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:28:15.834772    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:15.834772    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:15.866850    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:15.866850    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:28:14.830848    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:17.329771    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	I1213 10:28:18.423576    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:18.449885    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1213 10:28:18.482529    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.482601    5404 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:28:18.485766    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1213 10:28:18.514138    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.514797    5404 logs.go:284] No container was found matching "etcd"
	I1213 10:28:18.518214    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1213 10:28:18.550542    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.550542    5404 logs.go:284] No container was found matching "coredns"
	I1213 10:28:18.553540    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1213 10:28:18.584106    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.584106    5404 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:28:18.588197    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1213 10:28:18.619945    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.619977    5404 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:28:18.623644    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1213 10:28:18.654453    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.654453    5404 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:28:18.657446    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1213 10:28:18.687250    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.687250    5404 logs.go:284] No container was found matching "kindnet"
	I1213 10:28:18.690703    5404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1213 10:28:18.717150    5404 logs.go:282] 0 containers: []
	W1213 10:28:18.717150    5404 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:28:18.717150    5404 logs.go:123] Gathering logs for container status ...
	I1213 10:28:18.717150    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:28:18.770937    5404 logs.go:123] Gathering logs for kubelet ...
	I1213 10:28:18.770937    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:28:18.835919    5404 logs.go:123] Gathering logs for dmesg ...
	I1213 10:28:18.835919    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:28:18.872319    5404 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:28:18.873326    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:28:18.962288    5404 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:28:18.952563   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.953751   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.955148   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.956811   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.959348   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:28:18.952563   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.953751   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.955148   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.956811   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:28:18.959348   19396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:28:18.962288    5404 logs.go:123] Gathering logs for Docker ...
	I1213 10:28:18.963246    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1213 10:28:21.496578    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:28:21.522995    5404 out.go:203] 
	W1213 10:28:21.525440    5404 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1213 10:28:21.525581    5404 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1213 10:28:21.525667    5404 out.go:285] * Related issues:
	W1213 10:28:21.525667    5404 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1213 10:28:21.525824    5404 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1213 10:28:21.528379    5404 out.go:203] 
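	The exit above ends a six-minute health loop: each cycle runs pgrep for a kube-apiserver process and a set of filtered docker ps checks for the expected control-plane containers, and when none of them ever match, minikube gives up with K8S_APISERVER_MISSING. The same probe can be repeated by hand against the node (a sketch; <profile> stands for the profile under test and is not taken from this log):
	
	    # Re-run minikube's apiserver liveness probes manually (sketch).
	    # <profile> is a placeholder for the cluster profile name.
	    minikube ssh -p <profile> -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    minikube ssh -p <profile> -- docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}'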
	W1213 10:28:19.831718    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:21.833516    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:24.330384    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:26.331207    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:28.332900    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:30.334351    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:32.835020    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:35.333186    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	W1213 10:28:37.333782    8476 pod_ready.go:104] pod "coredns-66bc5c9577-pzlst" is not "Ready", error: <nil>
	I1213 10:28:39.834925    8476 pod_ready.go:94] pod "coredns-66bc5c9577-pzlst" is "Ready"
	I1213 10:28:39.834966    8476 pod_ready.go:86] duration metric: took 36.0154698s for pod "coredns-66bc5c9577-pzlst" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:28:39.845165    8476 pod_ready.go:83] waiting for pod "etcd-kubenet-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:28:39.854539    8476 pod_ready.go:94] pod "etcd-kubenet-416400" is "Ready"
	I1213 10:28:39.855541    8476 pod_ready.go:86] duration metric: took 10.3407ms for pod "etcd-kubenet-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:28:39.863535    8476 pod_ready.go:83] waiting for pod "kube-apiserver-kubenet-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:28:39.871543    8476 pod_ready.go:94] pod "kube-apiserver-kubenet-416400" is "Ready"
	I1213 10:28:39.871543    8476 pod_ready.go:86] duration metric: took 8.0079ms for pod "kube-apiserver-kubenet-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:28:39.874535    8476 pod_ready.go:83] waiting for pod "kube-controller-manager-kubenet-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:28:40.025973    8476 pod_ready.go:94] pod "kube-controller-manager-kubenet-416400" is "Ready"
	I1213 10:28:40.025973    8476 pod_ready.go:86] duration metric: took 151.4354ms for pod "kube-controller-manager-kubenet-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:28:40.229779    8476 pod_ready.go:83] waiting for pod "kube-proxy-7bdqb" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:28:40.625654    8476 pod_ready.go:94] pod "kube-proxy-7bdqb" is "Ready"
	I1213 10:28:40.625654    8476 pod_ready.go:86] duration metric: took 395.7533ms for pod "kube-proxy-7bdqb" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:28:40.828010    8476 pod_ready.go:83] waiting for pod "kube-scheduler-kubenet-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:28:41.225199    8476 pod_ready.go:94] pod "kube-scheduler-kubenet-416400" is "Ready"
	I1213 10:28:41.225199    8476 pod_ready.go:86] duration metric: took 397.0906ms for pod "kube-scheduler-kubenet-416400" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:28:41.225199    8476 pod_ready.go:40] duration metric: took 37.4121912s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 10:28:41.318573    8476 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 10:28:41.321589    8476 out.go:179] * Done! kubectl is now configured to use "kubenet-416400" cluster and "default" namespace by default
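	The pod_ready loop above checks each kube-system pod's Ready condition roughly every 2.5 seconds, capped at 4 minutes, before declaring the kubenet-416400 start done. A roughly equivalent wait, expressed as a plain-kubectl sketch rather than minikube's own code path, would be:
	
	    # Wait for CoreDNS readiness the way the pod_ready loop does (sketch):
	    kubectl --context kubenet-416400 -n kube-system wait pod \
	      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m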
	
	
	==> Docker <==
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.519842040Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.519963651Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.519978553Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.519984253Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.519989854Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.520014956Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.520057560Z" level=info msg="Initializing buildkit"
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.639585638Z" level=info msg="Completed buildkit initialization"
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.645206773Z" level=info msg="Daemon has completed initialization"
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.645396691Z" level=info msg="API listen on [::]:2376"
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.645511202Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 13 10:19:56 no-preload-803600 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 13 10:19:56 no-preload-803600 dockerd[927]: time="2025-12-13T10:19:56.645529304Z" level=info msg="API listen on /run/docker.sock"
	Dec 13 10:19:57 no-preload-803600 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 13 10:19:57 no-preload-803600 cri-dockerd[1223]: time="2025-12-13T10:19:57Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 13 10:19:57 no-preload-803600 cri-dockerd[1223]: time="2025-12-13T10:19:57Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 13 10:19:57 no-preload-803600 cri-dockerd[1223]: time="2025-12-13T10:19:57Z" level=info msg="Start docker client with request timeout 0s"
	Dec 13 10:19:57 no-preload-803600 cri-dockerd[1223]: time="2025-12-13T10:19:57Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 13 10:19:57 no-preload-803600 cri-dockerd[1223]: time="2025-12-13T10:19:57Z" level=info msg="Loaded network plugin cni"
	Dec 13 10:19:57 no-preload-803600 cri-dockerd[1223]: time="2025-12-13T10:19:57Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 13 10:19:57 no-preload-803600 cri-dockerd[1223]: time="2025-12-13T10:19:57Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 13 10:19:57 no-preload-803600 cri-dockerd[1223]: time="2025-12-13T10:19:57Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 13 10:19:57 no-preload-803600 cri-dockerd[1223]: time="2025-12-13T10:19:57Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 13 10:19:57 no-preload-803600 cri-dockerd[1223]: time="2025-12-13T10:19:57Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 13 10:19:57 no-preload-803600 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:22.614678   21624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:39:22.615589   21624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:39:22.618685   21624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:39:22.620721   21624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:39:22.621989   21624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000002] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +7.347224] CPU: 1 PID: 487650 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f03540a7b20
	[  +0.000039] Code: Unable to access opcode bytes at RIP 0x7f03540a7af6.
	[  +0.000001] RSP: 002b:00007fff4615c900 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.848535] CPU: 14 PID: 487834 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f24bdd40b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f24bdd40af6.
	[  +0.000001] RSP: 002b:00007ffcef45f750 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +9.262444] tmpfs: Unknown parameter 'noswap'
	[ +10.454536] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 10:39:22 up  2:15,  0 user,  load average: 0.36, 0.70, 1.95
	Linux no-preload-803600 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:39:19 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:39:19 no-preload-803600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1546.
	Dec 13 10:39:19 no-preload-803600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:39:19 no-preload-803600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:39:19 no-preload-803600 kubelet[21432]: E1213 10:39:19.848098   21432 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:39:19 no-preload-803600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:39:19 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:39:20 no-preload-803600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1547.
	Dec 13 10:39:20 no-preload-803600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:39:20 no-preload-803600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:39:20 no-preload-803600 kubelet[21459]: E1213 10:39:20.616649   21459 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:39:20 no-preload-803600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:39:20 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:39:21 no-preload-803600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1548.
	Dec 13 10:39:21 no-preload-803600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:39:21 no-preload-803600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:39:21 no-preload-803600 kubelet[21488]: E1213 10:39:21.357956   21488 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:39:21 no-preload-803600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:39:21 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:39:22 no-preload-803600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1549.
	Dec 13 10:39:22 no-preload-803600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:39:22 no-preload-803600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:39:22 no-preload-803600 kubelet[21598]: E1213 10:39:22.098235   21598 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:39:22 no-preload-803600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:39:22 no-preload-803600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-803600 -n no-preload-803600
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-803600 -n no-preload-803600: exit status 2 (576.6306ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-803600" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (256.43s)
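The kubelet journal above pins down the root cause of this failure: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host (the restart counter is past 1500), so no static pods, and therefore no apiserver, ever come up on no-preload-803600. The kernel section shows the host is WSL2, which still boots with the legacy hierarchy by default. One common remediation, an assumption based on standard WSL2 configuration rather than anything this report verifies, is to force the WSL2 VM onto cgroup v2 and restart it:

    # %UserProfile%\.wslconfig on the Windows host (hypothetical remediation sketch):
    [wsl2]
    kernelCommandLine = cgroup_no_v1=all

    # Then, from Windows, restart the WSL VM:
    # wsl --shutdown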


Test pass (358/427)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 8.14
4 TestDownloadOnly/v1.28.0/preload-exists 0.04
7 TestDownloadOnly/v1.28.0/kubectl 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.23
9 TestDownloadOnly/v1.28.0/DeleteAll 1.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.72
12 TestDownloadOnly/v1.34.2/json-events 6.91
13 TestDownloadOnly/v1.34.2/preload-exists 0
16 TestDownloadOnly/v1.34.2/kubectl 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.22
18 TestDownloadOnly/v1.34.2/DeleteAll 0.69
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.76
21 TestDownloadOnly/v1.35.0-beta.0/json-events 6.83
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.2
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.93
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.43
29 TestDownloadOnlyKic 1.56
30 TestBinaryMirror 2.53
31 TestOffline 119.49
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.28
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.28
36 TestAddons/Setup 294.34
38 TestAddons/serial/Volcano 50.23
40 TestAddons/serial/GCPAuth/Namespaces 0.24
41 TestAddons/serial/GCPAuth/FakeCredentials 12.15
45 TestAddons/parallel/RegistryCreds 1.26
47 TestAddons/parallel/InspektorGadget 12.14
48 TestAddons/parallel/MetricsServer 7.97
50 TestAddons/parallel/CSI 47.28
51 TestAddons/parallel/Headlamp 38.03
52 TestAddons/parallel/CloudSpanner 6.98
53 TestAddons/parallel/LocalPath 58.27
54 TestAddons/parallel/NvidiaDevicePlugin 5.76
55 TestAddons/parallel/Yakd 12.58
56 TestAddons/parallel/AmdGpuDevicePlugin 7.44
57 TestAddons/StoppedEnableDisable 12.93
58 TestCertOptions 64.07
59 TestCertExpiration 274.03
60 TestDockerFlags 50.5
61 TestForceSystemdFlag 89.57
62 TestForceSystemdEnv 66.15
68 TestErrorSpam/start 2.62
69 TestErrorSpam/status 2.21
70 TestErrorSpam/pause 2.57
71 TestErrorSpam/unpause 2.55
72 TestErrorSpam/stop 19.57
75 TestFunctional/serial/CopySyncFile 0.03
76 TestFunctional/serial/StartWithProxy 86.65
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 46.99
79 TestFunctional/serial/KubeContext 0.09
80 TestFunctional/serial/KubectlGetPods 0.26
83 TestFunctional/serial/CacheCmd/cache/add_remote 10.12
84 TestFunctional/serial/CacheCmd/cache/add_local 4.1
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.19
86 TestFunctional/serial/CacheCmd/cache/list 0.21
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.6
88 TestFunctional/serial/CacheCmd/cache/cache_reload 4.41
89 TestFunctional/serial/CacheCmd/cache/delete 0.38
90 TestFunctional/serial/MinikubeKubectlCmd 0.39
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 2.16
92 TestFunctional/serial/ExtraConfig 43.48
93 TestFunctional/serial/ComponentHealth 0.13
94 TestFunctional/serial/LogsCmd 1.79
95 TestFunctional/serial/LogsFileCmd 1.77
96 TestFunctional/serial/InvalidService 5.76
98 TestFunctional/parallel/ConfigCmd 1.23
100 TestFunctional/parallel/DryRun 1.47
101 TestFunctional/parallel/InternationalLanguage 0.68
102 TestFunctional/parallel/StatusCmd 1.9
107 TestFunctional/parallel/AddonsCmd 0.42
108 TestFunctional/parallel/PersistentVolumeClaim 57.15
110 TestFunctional/parallel/SSHCmd 1.39
111 TestFunctional/parallel/CpCmd 3.91
112 TestFunctional/parallel/MySQL 81.21
113 TestFunctional/parallel/FileSync 0.66
114 TestFunctional/parallel/CertSync 3.54
118 TestFunctional/parallel/NodeLabels 0.14
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.59
122 TestFunctional/parallel/License 1.68
123 TestFunctional/parallel/DockerEnv/powershell 5.81
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.36
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.3
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.3
127 TestFunctional/parallel/ImageCommands/ImageListShort 0.46
128 TestFunctional/parallel/ImageCommands/ImageListTable 0.5
129 TestFunctional/parallel/ImageCommands/ImageListJson 0.47
130 TestFunctional/parallel/ImageCommands/ImageListYaml 0.5
131 TestFunctional/parallel/ImageCommands/ImageBuild 5.7
132 TestFunctional/parallel/ImageCommands/Setup 1.76
133 TestFunctional/parallel/Version/short 0.17
134 TestFunctional/parallel/Version/components 1.05
135 TestFunctional/parallel/ServiceCmd/DeployApp 9.33
136 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.52
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.76
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.53
139 TestFunctional/parallel/ServiceCmd/List 0.96
141 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 1.05
142 TestFunctional/parallel/ServiceCmd/JSONOutput 0.98
143 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 54.77
146 TestFunctional/parallel/ServiceCmd/HTTPS 15.01
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.11
148 TestFunctional/parallel/ImageCommands/ImageRemove 1.4
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.49
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.9
151 TestFunctional/parallel/ServiceCmd/Format 15.01
152 TestFunctional/parallel/ServiceCmd/URL 15.01
153 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
158 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.21
159 TestFunctional/parallel/ProfileCmd/profile_not_create 1.07
160 TestFunctional/parallel/ProfileCmd/profile_list 0.89
161 TestFunctional/parallel/ProfileCmd/profile_json_output 0.93
162 TestFunctional/delete_echo-server_images 0.14
163 TestFunctional/delete_my-image_image 0.06
164 TestFunctional/delete_minikube_cached_images 0.06
168 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0.01
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.1
176 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 9.86
177 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 3.79
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.17
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.18
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.58
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 4.51
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.35
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.27
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.4
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 1.27
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 1.46
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.73
200 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.44
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 1.21
204 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 3.43
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.55
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 3.24
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.53
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 2.25
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.83
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.8
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.81
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.29
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.3
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.32
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.16
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 1.88
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.5
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.45
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.47
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.49
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 5.57
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.82
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 3.27
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 2.84
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 3.57
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.67
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.91
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 1.16
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.86
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.14
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.06
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.06
260 TestMultiControlPlane/serial/StartCluster 219.35
261 TestMultiControlPlane/serial/DeployApp 9.7
262 TestMultiControlPlane/serial/PingHostFromPods 2.53
263 TestMultiControlPlane/serial/AddWorkerNode 55.69
264 TestMultiControlPlane/serial/NodeLabels 0.16
265 TestMultiControlPlane/serial/HAppyAfterClusterStart 2.01
266 TestMultiControlPlane/serial/CopyFile 33.67
267 TestMultiControlPlane/serial/StopSecondaryNode 13.53
268 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 1.56
269 TestMultiControlPlane/serial/RestartSecondaryNode 50.96
270 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.97
271 TestMultiControlPlane/serial/RestartClusterKeepsNodes 178.23
272 TestMultiControlPlane/serial/DeleteSecondaryNode 14.65
273 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.49
274 TestMultiControlPlane/serial/StopCluster 37.34
275 TestMultiControlPlane/serial/RestartCluster 91.31
276 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.5
277 TestMultiControlPlane/serial/AddSecondaryNode 79.51
278 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.96
281 TestImageBuild/serial/Setup 46.84
282 TestImageBuild/serial/NormalBuild 4.43
283 TestImageBuild/serial/BuildWithBuildArg 2.03
284 TestImageBuild/serial/BuildWithDockerIgnore 1.32
285 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.22
290 TestJSONOutput/start/Command 79.72
291 TestJSONOutput/start/Audit 0
293 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/pause/Command 1.15
297 TestJSONOutput/pause/Audit 0
299 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/unpause/Command 0.95
303 TestJSONOutput/unpause/Audit 0
305 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
308 TestJSONOutput/stop/Command 12.19
309 TestJSONOutput/stop/Audit 0
311 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
312 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
313 TestErrorJSONOutput 0.67
315 TestKicCustomNetwork/create_custom_network 55.43
316 TestKicCustomNetwork/use_default_bridge_network 54.92
317 TestKicExistingNetwork 55.45
318 TestKicCustomSubnet 52.76
319 TestKicStaticIP 54.68
320 TestMainNoArgs 0.16
321 TestMinikubeProfile 101.86
324 TestMountStart/serial/StartWithMountFirst 13.77
325 TestMountStart/serial/VerifyMountFirst 0.57
326 TestMountStart/serial/StartWithMountSecond 13.83
327 TestMountStart/serial/VerifyMountSecond 0.52
328 TestMountStart/serial/DeleteFirst 2.43
329 TestMountStart/serial/VerifyMountPostDelete 0.53
330 TestMountStart/serial/Stop 1.87
331 TestMountStart/serial/RestartStopped 10.87
332 TestMountStart/serial/VerifyMountPostStop 0.55
335 TestMultiNode/serial/FreshStart2Nodes 131.17
336 TestMultiNode/serial/DeployApp2Nodes 7.91
337 TestMultiNode/serial/PingHostFrom2Pods 1.75
338 TestMultiNode/serial/AddNode 54.2
339 TestMultiNode/serial/MultiNodeLabels 0.14
340 TestMultiNode/serial/ProfileList 1.38
341 TestMultiNode/serial/CopyFile 19.25
342 TestMultiNode/serial/StopNode 3.75
343 TestMultiNode/serial/StartAfterStop 13.24
344 TestMultiNode/serial/RestartKeepsNodes 82.04
345 TestMultiNode/serial/DeleteNode 8.39
346 TestMultiNode/serial/StopMultiNode 24.01
347 TestMultiNode/serial/RestartMultiNode 61.56
348 TestMultiNode/serial/ValidateNameConflict 50.3
352 TestPreload 164.71
353 TestScheduledStopWindows 111.66
357 TestInsufficientStorage 27.49
358 TestRunningBinaryUpgrade 372.36
361 TestMissingContainerUpgrade 130.15
363 TestStoppedBinaryUpgrade/Setup 0.98
364 TestNoKubernetes/serial/StartNoK8sWithVersion 0.26
365 TestNoKubernetes/serial/StartWithK8s 89.49
366 TestStoppedBinaryUpgrade/Upgrade 407.31
367 TestNoKubernetes/serial/StartWithStopK8s 22.15
368 TestNoKubernetes/serial/Start 20.6
377 TestPause/serial/Start 86.05
378 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
379 TestNoKubernetes/serial/VerifyK8sNotRunning 0.63
380 TestNoKubernetes/serial/ProfileList 10.28
381 TestNoKubernetes/serial/Stop 2.01
382 TestNoKubernetes/serial/StartNoArgs 10.5
383 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.56
395 TestPause/serial/SecondStartNoReconfiguration 46.27
396 TestPause/serial/Pause 1.03
397 TestPause/serial/VerifyStatus 0.63
398 TestPause/serial/Unpause 0.85
399 TestPause/serial/PauseAgain 1.62
400 TestPause/serial/DeletePaused 9.57
401 TestPause/serial/VerifyDeletedResources 17.69
402 TestStoppedBinaryUpgrade/MinikubeLogs 1.39
404 TestStartStop/group/old-k8s-version/serial/FirstStart 63.31
407 TestStartStop/group/old-k8s-version/serial/DeployApp 11.85
408 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.99
409 TestStartStop/group/old-k8s-version/serial/Stop 20.33
411 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 86.26
412 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.53
413 TestStartStop/group/old-k8s-version/serial/SecondStart 33.68
414 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 23.01
415 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.27
416 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.46
417 TestStartStop/group/old-k8s-version/serial/Pause 5.13
418 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.6
421 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.48
422 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.31
423 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.52
424 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 48.52
425 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
426 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.27
427 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.47
428 TestStartStop/group/default-k8s-diff-port/serial/Pause 5.04
430 TestStartStop/group/embed-certs/serial/FirstStart 78.71
431 TestStartStop/group/embed-certs/serial/DeployApp 8.6
432 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.47
433 TestStartStop/group/embed-certs/serial/Stop 12.18
434 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.52
435 TestStartStop/group/embed-certs/serial/SecondStart 49.11
436 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
437 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.27
438 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.47
439 TestStartStop/group/embed-certs/serial/Pause 5.1
440 TestNetworkPlugins/group/auto/Start 85.46
441 TestNetworkPlugins/group/auto/KubeletFlags 0.57
442 TestNetworkPlugins/group/auto/NetCatPod 15.53
443 TestNetworkPlugins/group/auto/DNS 0.22
444 TestNetworkPlugins/group/auto/Localhost 0.2
445 TestNetworkPlugins/group/auto/HairPin 0.19
446 TestNetworkPlugins/group/kindnet/Start 78.21
449 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
450 TestNetworkPlugins/group/kindnet/KubeletFlags 0.56
451 TestNetworkPlugins/group/kindnet/NetCatPod 14.46
452 TestNetworkPlugins/group/kindnet/DNS 0.23
453 TestNetworkPlugins/group/kindnet/Localhost 0.2
454 TestNetworkPlugins/group/kindnet/HairPin 0.2
455 TestStartStop/group/no-preload/serial/Stop 1.88
456 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.53
458 TestStartStop/group/newest-cni/serial/DeployApp 0
460 TestNetworkPlugins/group/calico/Start 111.69
461 TestNetworkPlugins/group/custom-flannel/Start 71.66
462 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.56
463 TestNetworkPlugins/group/custom-flannel/NetCatPod 16.56
464 TestNetworkPlugins/group/custom-flannel/DNS 0.23
465 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
466 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
467 TestStartStop/group/newest-cni/serial/Stop 1.9
468 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.56
470 TestNetworkPlugins/group/calico/ControllerPod 6.01
471 TestNetworkPlugins/group/calico/KubeletFlags 0.67
472 TestNetworkPlugins/group/calico/NetCatPod 16.63
473 TestNetworkPlugins/group/calico/DNS 0.23
474 TestNetworkPlugins/group/calico/Localhost 0.21
475 TestNetworkPlugins/group/calico/HairPin 0.21
476 TestNetworkPlugins/group/false/Start 85.49
477 TestNetworkPlugins/group/enable-default-cni/Start 84.95
478 TestNetworkPlugins/group/false/KubeletFlags 0.55
479 TestNetworkPlugins/group/false/NetCatPod 15.44
480 TestNetworkPlugins/group/false/DNS 0.24
481 TestNetworkPlugins/group/false/Localhost 0.2
482 TestNetworkPlugins/group/false/HairPin 0.2
483 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.58
484 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.48
485 TestNetworkPlugins/group/enable-default-cni/DNS 0.24
486 TestNetworkPlugins/group/enable-default-cni/Localhost 0.31
487 TestNetworkPlugins/group/enable-default-cni/HairPin 0.31
488 TestNetworkPlugins/group/flannel/Start 79.91
489 TestNetworkPlugins/group/bridge/Start 89.65
491 TestNetworkPlugins/group/flannel/ControllerPod 6.01
492 TestNetworkPlugins/group/flannel/KubeletFlags 0.57
493 TestNetworkPlugins/group/flannel/NetCatPod 14.58
494 TestNetworkPlugins/group/flannel/DNS 0.24
495 TestNetworkPlugins/group/flannel/Localhost 0.2
496 TestNetworkPlugins/group/flannel/HairPin 0.2
497 TestNetworkPlugins/group/bridge/KubeletFlags 0.57
498 TestNetworkPlugins/group/bridge/NetCatPod 14.65
499 TestNetworkPlugins/group/bridge/DNS 0.24
500 TestNetworkPlugins/group/bridge/Localhost 0.2
501 TestNetworkPlugins/group/kubenet/Start 93.01
502 TestNetworkPlugins/group/bridge/HairPin 0.2
503 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
504 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
505 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.49
507 TestNetworkPlugins/group/kubenet/KubeletFlags 0.56
508 TestNetworkPlugins/group/kubenet/NetCatPod 16.48
509 TestNetworkPlugins/group/kubenet/DNS 0.22
510 TestNetworkPlugins/group/kubenet/Localhost 0.62
511 TestNetworkPlugins/group/kubenet/HairPin 0.21

TestDownloadOnly/v1.28.0/json-events (8.14s)
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-056000 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-056000 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker: (8.1371079s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (8.14s)
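
For reference, the "(dbg) Run" / "(dbg) Done" pair above corresponds to the harness shelling out to the minikube binary and timing the call. Below is a minimal Go sketch of that pattern, assuming the report's relative binary path and profile name; the runMinikube helper is illustrative, not the harness's actual code.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

// runMinikube shells out to the minikube binary the way the integration
// tests do, returning combined output and elapsed wall-clock time.
// The binary path is taken from the report; the helper itself is a sketch.
func runMinikube(args ...string) (string, time.Duration, error) {
	start := time.Now()
	out, err := exec.Command("out/minikube-windows-amd64.exe", args...).CombinedOutput()
	return string(out), time.Since(start), err
}

func main() {
	// Arguments copied from the Run line above.
	out, elapsed, err := runMinikube("start", "-o=json", "--download-only",
		"-p", "download-only-056000", "--force", "--alsologtostderr",
		"--kubernetes-version=v1.28.0", "--container-runtime=docker", "--driver=docker")
	if err != nil {
		log.Fatalf("minikube start failed after %s: %v\n%s", elapsed, err, out)
	}
	fmt.Printf("done in %s\n", elapsed)
}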

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0.04s)
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1213 08:29:26.223267    2968 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1213 08:29:26.266714    2968 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.04s)
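
The preload-exists subtest only has to confirm that the tarball landed in the local cache. A sketch of that filesystem probe, using the cache path printed by preload.go:203 above; minikube's real check also validates version and runtime, so this shows only the existence test.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Path as logged above; adjust MINIKUBE_HOME for other environments.
	// This probe is a sketch, not minikube's actual preload validation.
	tarball := filepath.Join(`C:\Users\jenkins.minikube4\minikube-integration\.minikube`,
		"cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4")
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("preload missing:", err)
		return
	}
	fmt.Println("found local preload:", tarball)
}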

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
--- PASS: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.23s)
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-056000
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-056000: exit status 85 (230.9947ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                       ARGS                                                                        │       PROFILE        │       USER        │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-056000 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker │ download-only-056000 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:29:18
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 08:29:18.156323    6680 out.go:360] Setting OutFile to fd 696 ...
	I1213 08:29:18.198324    6680 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:29:18.198324    6680 out.go:374] Setting ErrFile to fd 700...
	I1213 08:29:18.198324    6680 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1213 08:29:18.208314    6680 root.go:314] Error reading config file at C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I1213 08:29:18.215321    6680 out.go:368] Setting JSON to true
	I1213 08:29:18.218317    6680 start.go:133] hostinfo: {"hostname":"minikube4","uptime":365,"bootTime":1765614192,"procs":185,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 08:29:18.218317    6680 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 08:29:18.225315    6680 out.go:99] [download-only-056000] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 08:29:18.225315    6680 notify.go:221] Checking for updates...
	W1213 08:29:18.225315    6680 preload.go:354] Failed to list preload files: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I1213 08:29:18.227318    6680 out.go:171] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:29:18.230328    6680 out.go:171] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 08:29:18.232321    6680 out.go:171] MINIKUBE_LOCATION=22128
	I1213 08:29:18.234320    6680 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1213 08:29:18.238319    6680 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 08:29:18.239317    6680 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:29:18.362907    6680 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 08:29:18.366445    6680 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:29:19.070098    6680 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:69 SystemTime:2025-12-13 08:29:19.049233094 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 08:29:19.080314    6680 out.go:99] Using the docker driver based on user configuration
	I1213 08:29:19.080314    6680 start.go:309] selected driver: docker
	I1213 08:29:19.080314    6680 start.go:927] validating driver "docker" against <nil>
	I1213 08:29:19.087309    6680 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:29:19.323413    6680 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:69 SystemTime:2025-12-13 08:29:19.3022143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Index
ServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Ex
pected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescrip
tion:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program
Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 08:29:19.323758    6680 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 08:29:19.375435    6680 start_flags.go:410] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I1213 08:29:19.376168    6680 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 08:29:19.379019    6680 out.go:171] Using Docker Desktop driver with root privileges
	I1213 08:29:19.380725    6680 cni.go:84] Creating CNI manager for ""
	I1213 08:29:19.380725    6680 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 08:29:19.381262    6680 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 08:29:19.381441    6680 start.go:353] cluster config:
	{Name:download-only-056000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-056000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:29:19.384130    6680 out.go:99] Starting "download-only-056000" primary control-plane node in "download-only-056000" cluster
	I1213 08:29:19.384130    6680 cache.go:134] Beginning downloading kic base image for docker with docker
	I1213 08:29:19.385946    6680 out.go:99] Pulling base image v0.0.48-1765275396-22083 ...
	I1213 08:29:19.386946    6680 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1213 08:29:19.386946    6680 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 08:29:19.441677    6680 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 08:29:19.441677    6680 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1765275396-22083@sha256_ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f.tar
	I1213 08:29:19.441677    6680 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1765275396-22083@sha256_ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f.tar
	I1213 08:29:19.441677    6680 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1213 08:29:19.442686    6680 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 08:29:19.446210    6680 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1213 08:29:19.446250    6680 cache.go:65] Caching tarball of preloaded images
	I1213 08:29:19.446612    6680 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1213 08:29:19.448688    6680 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1213 08:29:19.448715    6680 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1213 08:29:19.578577    6680 preload.go:295] Got checksum from GCS API "8a955be835827bc584bcce0658a7fcc9"
	I1213 08:29:19.579182    6680 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-056000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-056000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.23s)
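
Note that this subtest passes despite the non-zero exit: for a --download-only profile the host is never created, so "minikube logs" is expected to fail, and the report shows exit status 85. A Go sketch of detecting that expected code, assuming (as this run suggests) that 85 is the status to treat as a pass; extracting the code via exec.ExitError is standard library behavior.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// "minikube logs" against a never-started profile; command copied
	// from the Run line above.
	out, err := exec.Command("out/minikube-windows-amd64.exe",
		"logs", "-p", "download-only-056000").CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 85 {
		fmt.Println("expected failure for download-only profile")
		return
	}
	fmt.Printf("unexpected result: err=%v\n%s", err, out)
}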

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (1.15s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:196: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.150759s)
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (1.15s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.72s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-056000
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.72s)

                                                
                                    
TestDownloadOnly/v1.34.2/json-events (6.91s)
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-335300 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-335300 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker: (6.9142454s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (6.91s)
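
Both Last Start logs in this group show the preload tarball being fetched with an MD5 checksum obtained from the GCS API and appended as a "?checksum=md5:..." query (download.go:108). A stdlib Go sketch of that verify-while-downloading step, using the v1.34.2 URL and checksum from the log below; streaming straight into the hash with a plain HTTP GET is a simplification of minikube's actual downloader.

package main

import (
	"crypto/md5"
	"fmt"
	"io"
	"log"
	"net/http"
)

// fetchWithMD5 downloads url and compares the body's MD5 against want,
// mirroring the checksum= query the log shows being attached.
func fetchWithMD5(url, want string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	h := md5.New()
	if _, err := io.Copy(h, resp.Body); err != nil {
		return err
	}
	if got := fmt.Sprintf("%x", h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// URL and checksum copied from the download.go:108 line in the log.
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4"
	if err := fetchWithMD5(url, "cafa99c47d4d00983a02f051962239e0"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("preload verified")
}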

                                                
                                    
TestDownloadOnly/v1.34.2/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1213 08:29:35.290947    2968 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
I1213 08:29:35.291155    2968 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.2/kubectl
--- PASS: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/LogsDuration (0.22s)
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-335300
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-335300: exit status 85 (211.4382ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │       PROFILE        │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-056000 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker │ download-only-056000 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                             │ minikube             │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │ 13 Dec 25 08:29 UTC │
	│ delete  │ -p download-only-056000                                                                                                                           │ download-only-056000 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │ 13 Dec 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-335300 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker │ download-only-335300 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:29:28
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 08:29:28.449099    3016 out.go:360] Setting OutFile to fd 820 ...
	I1213 08:29:28.494095    3016 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:29:28.494095    3016 out.go:374] Setting ErrFile to fd 824...
	I1213 08:29:28.494095    3016 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:29:28.507409    3016 out.go:368] Setting JSON to true
	I1213 08:29:28.510849    3016 start.go:133] hostinfo: {"hostname":"minikube4","uptime":375,"bootTime":1765614192,"procs":185,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 08:29:28.510849    3016 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 08:29:28.554185    3016 out.go:99] [download-only-335300] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 08:29:28.555118    3016 notify.go:221] Checking for updates...
	I1213 08:29:28.557264    3016 out.go:171] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:29:28.559440    3016 out.go:171] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 08:29:28.561727    3016 out.go:171] MINIKUBE_LOCATION=22128
	I1213 08:29:28.563527    3016 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1213 08:29:28.567206    3016 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 08:29:28.567624    3016 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:29:28.681228    3016 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 08:29:28.684499    3016 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:29:28.915604    3016 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:69 SystemTime:2025-12-13 08:29:28.899424329 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 08:29:28.918803    3016 out.go:99] Using the docker driver based on user configuration
	I1213 08:29:28.918803    3016 start.go:309] selected driver: docker
	I1213 08:29:28.918803    3016 start.go:927] validating driver "docker" against <nil>
	I1213 08:29:28.925984    3016 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:29:29.168818    3016 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:69 SystemTime:2025-12-13 08:29:29.149916667 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 08:29:29.169462    3016 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 08:29:29.204635    3016 start_flags.go:410] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I1213 08:29:29.205638    3016 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 08:29:29.207632    3016 out.go:171] Using Docker Desktop driver with root privileges
	I1213 08:29:29.209633    3016 cni.go:84] Creating CNI manager for ""
	I1213 08:29:29.209633    3016 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 08:29:29.209633    3016 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 08:29:29.209633    3016 start.go:353] cluster config:
	{Name:download-only-335300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-335300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:29:29.212633    3016 out.go:99] Starting "download-only-335300" primary control-plane node in "download-only-335300" cluster
	I1213 08:29:29.212633    3016 cache.go:134] Beginning downloading kic base image for docker with docker
	I1213 08:29:29.213634    3016 out.go:99] Pulling base image v0.0.48-1765275396-22083 ...
	I1213 08:29:29.213634    3016 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 08:29:29.214639    3016 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 08:29:29.267909    3016 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 08:29:29.267909    3016 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1765275396-22083@sha256_ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f.tar
	I1213 08:29:29.267909    3016 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1765275396-22083@sha256_ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f.tar
	I1213 08:29:29.267909    3016 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1213 08:29:29.267909    3016 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory, skipping pull
	I1213 08:29:29.267909    3016 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in cache, skipping pull
	I1213 08:29:29.267909    3016 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	I1213 08:29:29.276382    3016 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1213 08:29:29.276448    3016 cache.go:65] Caching tarball of preloaded images
	I1213 08:29:29.276764    3016 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1213 08:29:29.279058    3016 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1213 08:29:29.279058    3016 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1213 08:29:29.402097    3016 preload.go:295] Got checksum from GCS API "cafa99c47d4d00983a02f051962239e0"
	I1213 08:29:29.402318    3016 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4?checksum=md5:cafa99c47d4d00983a02f051962239e0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-335300 host does not exist
	  To start a cluster, run: "minikube start -p download-only-335300"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.22s)
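Note on the preload download in the log above: minikube first asks the GCS API for the tarball's MD5 (preload.go:295, "Got checksum from GCS API"), then appends it to the download URL as a ?checksum=md5:... query parameter (download.go:108) so the payload can be verified as it lands on disk. A minimal Go sketch of that verify-while-downloading pattern, assuming a plain HTTP GET; minikube delegates this to its download package, so the helper below is illustrative, not its actual code:

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // downloadWithMD5 streams url into dest while hashing the bytes,
    // then rejects the file if the digest does not match want (hex MD5).
    func downloadWithMD5(url, dest, want string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("unexpected status: %s", resp.Status)
        }

        out, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer out.Close()

        h := md5.New()
        // Tee the body into both the file and the hash in one pass.
        if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != want {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
        }
        return nil
    }

    func main() {
        // URL and checksum value taken from the download.go:108 line above.
        err := downloadWithMD5(
            "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4",
            "preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4",
            "cafa99c47d4d00983a02f051962239e0",
        )
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }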

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAll (0.69s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.69s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.76s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-335300
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.76s)

TestDownloadOnly/v1.35.0-beta.0/json-events (6.83s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-523900 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-523900 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=docker: (6.8311454s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (6.83s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1213 08:29:43.796998    2968 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
I1213 08:29:43.796998    2968 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.2s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-523900
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-523900: exit status 85 (198.8081ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                           │       PROFILE        │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-056000 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker        │ download-only-056000 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                    │ minikube             │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │ 13 Dec 25 08:29 UTC │
	│ delete  │ -p download-only-056000                                                                                                                                  │ download-only-056000 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │ 13 Dec 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-335300 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker        │ download-only-335300 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                    │ minikube             │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │ 13 Dec 25 08:29 UTC │
	│ delete  │ -p download-only-335300                                                                                                                                  │ download-only-335300 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │ 13 Dec 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-523900 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=docker │ download-only-523900 │ minikube4\jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:29:37
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 08:29:37.037271    8296 out.go:360] Setting OutFile to fd 872 ...
	I1213 08:29:37.081030    8296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:29:37.081030    8296 out.go:374] Setting ErrFile to fd 876...
	I1213 08:29:37.081030    8296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:29:37.095152    8296 out.go:368] Setting JSON to true
	I1213 08:29:37.098507    8296 start.go:133] hostinfo: {"hostname":"minikube4","uptime":384,"bootTime":1765614192,"procs":186,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 08:29:37.098507    8296 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 08:29:37.103516    8296 out.go:99] [download-only-523900] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 08:29:37.103516    8296 notify.go:221] Checking for updates...
	I1213 08:29:37.106763    8296 out.go:171] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:29:37.108314    8296 out.go:171] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 08:29:37.110878    8296 out.go:171] MINIKUBE_LOCATION=22128
	I1213 08:29:37.112919    8296 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1213 08:29:37.117264    8296 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 08:29:37.117862    8296 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:29:37.228236    8296 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 08:29:37.231788    8296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:29:37.463909    8296 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:69 SystemTime:2025-12-13 08:29:37.442060494 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 08:29:37.469548    8296 out.go:99] Using the docker driver based on user configuration
	I1213 08:29:37.469616    8296 start.go:309] selected driver: docker
	I1213 08:29:37.469636    8296 start.go:927] validating driver "docker" against <nil>
	I1213 08:29:37.475398    8296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:29:37.729403    8296 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:69 SystemTime:2025-12-13 08:29:37.710971181 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 08:29:37.729403    8296 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 08:29:37.764018    8296 start_flags.go:410] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I1213 08:29:37.764702    8296 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 08:29:37.998503    8296 out.go:171] Using Docker Desktop driver with root privileges
	I1213 08:29:38.002292    8296 cni.go:84] Creating CNI manager for ""
	I1213 08:29:38.002415    8296 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1213 08:29:38.002446    8296 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 08:29:38.002601    8296 start.go:353] cluster config:
	{Name:download-only-523900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-523900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:29:38.004522    8296 out.go:99] Starting "download-only-523900" primary control-plane node in "download-only-523900" cluster
	I1213 08:29:38.004522    8296 cache.go:134] Beginning downloading kic base image for docker with docker
	I1213 08:29:38.007601    8296 out.go:99] Pulling base image v0.0.48-1765275396-22083 ...
	I1213 08:29:38.007601    8296 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 08:29:38.007601    8296 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 08:29:38.065725    8296 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 08:29:38.065725    8296 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1765275396-22083@sha256_ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f.tar
	I1213 08:29:38.066736    8296 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1765275396-22083@sha256_ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f.tar
	I1213 08:29:38.066736    8296 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1213 08:29:38.066736    8296 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory, skipping pull
	I1213 08:29:38.066736    8296 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in cache, skipping pull
	I1213 08:29:38.066736    8296 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	I1213 08:29:38.073202    8296 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1213 08:29:38.073305    8296 cache.go:65] Caching tarball of preloaded images
	I1213 08:29:38.073460    8296 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1213 08:29:38.076282    8296 out.go:99] Downloading Kubernetes v1.35.0-beta.0 preload ...
	I1213 08:29:38.076282    8296 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1213 08:29:38.210484    8296 preload.go:295] Got checksum from GCS API "7f0e1a4aaa3540d32279d04bf9728fae"
	I1213 08:29:38.211070    8296 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4?checksum=md5:7f0e1a4aaa3540d32279d04bf9728fae -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-523900 host does not exist
	  To start a cluster, run: "minikube start -p download-only-523900"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.20s)
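Note on the localpath.go:148 "windows sanitize" lines above: ':' is not a legal character in Windows file names, so the tag and digest separators in the cached image file name (kicbase-builds:v0.0.48...@sha256:...) are rewritten to '_' while the C:\ drive prefix is kept. A small Go sketch of that rewrite; the drive-letter special case is inferred from the log output, not copied from minikube's source:

    package main

    import (
        "fmt"
        "strings"
    )

    // sanitizeWindowsPath replaces ':' with '_' everywhere except a
    // leading drive-letter prefix such as "C:".
    func sanitizeWindowsPath(p string) string {
        if len(p) >= 2 && p[1] == ':' {
            return p[:2] + strings.ReplaceAll(p[2:], ":", "_")
        }
        return strings.ReplaceAll(p, ":", "_")
    }

    func main() {
        // Shortened example path in the spirit of the log line above.
        in := `C:\cache\kic\amd64\kicbase-builds:v0.0.48@sha256:ffa93f.tar`
        fmt.Println(sanitizeWindowsPath(in))
        // Output: C:\cache\kic\amd64\kicbase-builds_v0.0.48@sha256_ffa93f.tar
    }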

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.93s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.93s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.43s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-523900
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.43s)

TestDownloadOnlyKic (1.56s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-142900 --alsologtostderr --driver=docker
aaa_download_only_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-142900 --alsologtostderr --driver=docker: (1.0487962s)
helpers_test.go:176: Cleaning up "download-docker-142900" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-142900
--- PASS: TestDownloadOnlyKic (1.56s)

TestBinaryMirror (2.53s)

=== RUN   TestBinaryMirror
I1213 08:29:48.483939    2968 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/windows/amd64/kubectl.exe.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-607200 --alsologtostderr --binary-mirror http://127.0.0.1:62556 --driver=docker
aaa_download_only_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-607200 --alsologtostderr --binary-mirror http://127.0.0.1:62556 --driver=docker: (1.7979823s)
helpers_test.go:176: Cleaning up "binary-mirror-607200" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-607200
--- PASS: TestBinaryMirror (2.53s)
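Note on TestBinaryMirror above: the --binary-mirror flag makes minikube fetch kubectl.exe from http://127.0.0.1:62556 instead of dl.k8s.io. Such a mirror can be as simple as a static file server whose directory mimics the upstream release tree; the ./mirror layout below is a hypothetical example (the test wires up its own server internally in aaa_download_only_test.go):

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // ./mirror would contain e.g.
        //   release/v1.34.2/bin/windows/amd64/kubectl.exe
        // matching the path structure minikube requests from dl.k8s.io.
        fs := http.FileServer(http.Dir("./mirror"))
        log.Fatal(http.ListenAndServe("127.0.0.1:62556", fs))
    }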

                                                
                                    
TestOffline (119.49s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-313900 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-313900 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker: (1m55.3184492s)
helpers_test.go:176: Cleaning up "offline-docker-313900" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-313900
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-313900: (4.1711073s)
--- PASS: TestOffline (119.49s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.28s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-612900
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-612900: exit status 85 (277.7381ms)

-- stdout --
	* Profile "addons-612900" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-612900"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.28s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.28s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-612900
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-612900: exit status 85 (275.0539ms)

-- stdout --
	* Profile "addons-612900" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-612900"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.28s)

TestAddons/Setup (294.34s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-612900 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-612900 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (4m54.3431748s)
--- PASS: TestAddons/Setup (294.34s)

TestAddons/serial/Volcano (50.23s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:878: volcano-admission stabilized in 18.0023ms
addons_test.go:886: volcano-controller stabilized in 18.0023ms
addons_test.go:870: volcano-scheduler stabilized in 18.5464ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-76c996c8bf-l42w9" [59c345d2-e8f5-426d-8e68-c30c2c869cc1] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.0063311s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-6c447bd768-2vr4t" [647f2445-9404-49a2-a787-50d16f0cf419] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.0084913s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-6fd4f85cb8-5xvkl" [2dd24d08-cc15-46ab-ba37-5170419b76ea] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.0059932s
addons_test.go:905: (dbg) Run:  kubectl --context addons-612900 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-612900 create -f testdata\vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-612900 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [a656ac20-5955-48c6-9ac6-ee0c4bd5228c] Pending
helpers_test.go:353: "test-job-nginx-0" [a656ac20-5955-48c6-9ac6-ee0c4bd5228c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [a656ac20-5955-48c6-9ac6-ee0c4bd5228c] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 20.0085444s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-612900 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-612900 addons disable volcano --alsologtostderr -v=1: (12.4928164s)
--- PASS: TestAddons/serial/Volcano (50.23s)

TestAddons/serial/GCPAuth/Namespaces (0.24s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-612900 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-612900 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.24s)

TestAddons/serial/GCPAuth/FakeCredentials (12.15s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-612900 create -f testdata\busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-612900 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [a3c57882-42fb-4ab5-883e-f61ff95895b7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [a3c57882-42fb-4ab5-883e-f61ff95895b7] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.0066019s
addons_test.go:696: (dbg) Run:  kubectl --context addons-612900 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-612900 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-612900 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-612900 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (12.15s)

TestAddons/parallel/RegistryCreds (1.26s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 7.2222ms
addons_test.go:327: (dbg) Run:  out/minikube-windows-amd64.exe addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-612900
addons_test.go:334: (dbg) Run:  kubectl --context addons-612900 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-612900 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (1.26s)

TestAddons/parallel/InspektorGadget (12.14s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-bnhrz" [efcecb86-bf7a-4103-a993-1e9ce53a5d31] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0300402s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-612900 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-612900 addons disable inspektor-gadget --alsologtostderr -v=1: (6.1085359s)
--- PASS: TestAddons/parallel/InspektorGadget (12.14s)

TestAddons/parallel/MetricsServer (7.97s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 9.0445ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-g7hh5" [fcdc772a-83bc-4b71-89ba-89ac7a9d1289] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0067538s
addons_test.go:465: (dbg) Run:  kubectl --context addons-612900 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-612900 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-612900 addons disable metrics-server --alsologtostderr -v=1: (1.8245628s)
--- PASS: TestAddons/parallel/MetricsServer (7.97s)

TestAddons/parallel/CSI (47.28s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1213 08:36:34.867051    2968 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1213 08:36:34.882555    2968 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1213 08:36:34.882555    2968 kapi.go:107] duration metric: took 15.5033ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 15.5033ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-612900 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-612900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-612900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-612900 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-612900 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-612900 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [f87822ee-e4be-4cd5-b54b-366d6fc1f250] Pending
helpers_test.go:353: "task-pv-pod" [f87822ee-e4be-4cd5-b54b-366d6fc1f250] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [f87822ee-e4be-4cd5-b54b-366d6fc1f250] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.0073283s
addons_test.go:574: (dbg) Run:  kubectl --context addons-612900 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-612900 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-612900 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-612900 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-612900 delete pod task-pv-pod: (1.0888272s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-612900 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-612900 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-612900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-612900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-612900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-612900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-612900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-612900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-612900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-612900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-612900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-612900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-612900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-612900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-612900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-612900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-612900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-612900 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-612900 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [6091a4e4-d6ce-43c5-9f21-07eb689bdbb4] Pending
helpers_test.go:353: "task-pv-pod-restore" [6091a4e4-d6ce-43c5-9f21-07eb689bdbb4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [6091a4e4-d6ce-43c5-9f21-07eb689bdbb4] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.0060309s
addons_test.go:616: (dbg) Run:  kubectl --context addons-612900 delete pod task-pv-pod-restore
addons_test.go:616: (dbg) Done: kubectl --context addons-612900 delete pod task-pv-pod-restore: (1.1821411s)
addons_test.go:620: (dbg) Run:  kubectl --context addons-612900 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-612900 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-612900 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-612900 addons disable volumesnapshots --alsologtostderr -v=1: (1.2843806s)
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-612900 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-612900 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.3964632s)
--- PASS: TestAddons/parallel/CSI (47.28s)
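Note on the repeated helpers_test.go:403 invocations in the CSI test above: the helper re-runs kubectl get pvc ... -o jsonpath={.status.phase} until the claim reports "Bound" or the 6m0s deadline passes. A Go sketch of that polling loop, reusing the context and PVC names from this log; the loop body is illustrative rather than the test helper's actual implementation:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
        "time"
    )

    // waitPVCBound polls the PVC's .status.phase via kubectl until it
    // is "Bound" or the timeout elapses.
    func waitPVCBound(context, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", context,
                "get", "pvc", name, "-o", "jsonpath={.status.phase}").Output()
            if err == nil && strings.TrimSpace(string(out)) == "Bound" {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pvc %s not Bound within %s", name, timeout)
    }

    func main() {
        if err := waitPVCBound("addons-612900", "hpvc-restore", 6*time.Minute); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }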

                                                
                                    
TestAddons/parallel/Headlamp (38.03s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-612900 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-612900 --alsologtostderr -v=1: (1.5610067s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-fs65c" [b79b8b56-ffab-4fcf-8320-27350fa33fbd] Pending
helpers_test.go:353: "headlamp-dfcdc64b-fs65c" [b79b8b56-ffab-4fcf-8320-27350fa33fbd] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-fs65c" [b79b8b56-ffab-4fcf-8320-27350fa33fbd] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 30.0203929s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-612900 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-612900 addons disable headlamp --alsologtostderr -v=1: (6.4461439s)
--- PASS: TestAddons/parallel/Headlamp (38.03s)

TestAddons/parallel/CloudSpanner (6.98s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-x9xhb" [67dacbb1-5a7e-46ac-a71a-b248d25de282] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.0058917s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-612900 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.98s)

TestAddons/parallel/LocalPath (58.27s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-612900 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-612900 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-612900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-612900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-612900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-612900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-612900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-612900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-612900 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-612900 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [8f1f6ebb-6a6c-4eed-968b-7e05c7917e7c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [8f1f6ebb-6a6c-4eed-968b-7e05c7917e7c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [8f1f6ebb-6a6c-4eed-968b-7e05c7917e7c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.0057958s
addons_test.go:969: (dbg) Run:  kubectl --context addons-612900 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-612900 ssh "cat /opt/local-path-provisioner/pvc-da9b552b-4efc-447e-a070-8a57fcc1bd05_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-612900 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-612900 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-612900 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-612900 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.2653767s)
--- PASS: TestAddons/parallel/LocalPath (58.27s)
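Note on the repeated jsonpath polls above: helpers_test.go re-runs the same query until the claim binds, because the local-path provisioner binds lazily (WaitForFirstConsumer), so the PVC stays Pending until the consuming pod is scheduled. A minimal Go sketch of that loop, shelling out to kubectl the same way; the context, namespace, and timeout are illustrative, not the harness's helper:

// Sketch only: poll a PVC's phase via kubectl + JSONPath until it is Bound.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForPVCBound(kubeContext, namespace, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-n", namespace,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		// local-path binds only once a consumer pod is scheduled, so keep polling.
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %v", namespace, name, timeout)
}

func main() {
	if err := waitForPVCBound("addons-612900", "default", "test-pvc", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}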

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.76s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-clmpz" [5eface52-848e-421d-ba36-306668f56c94] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0080399s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-612900 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.76s)

                                                
                                    
TestAddons/parallel/Yakd (12.58s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-6654c87f9b-z6td6" [19a5572a-6e24-4ee7-a5ed-7e4784ec3dec] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0056516s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-612900 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-612900 addons disable yakd --alsologtostderr -v=1: (6.5716405s)
--- PASS: TestAddons/parallel/Yakd (12.58s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (7.44s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-9bnj4" [2c55d287-c0be-4e9d-82e4-c0be0753f0bf] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.007207s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-612900 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-612900 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: (1.4324757s)
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (7.44s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.93s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-612900
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-612900: (12.0940073s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-612900
addons_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-612900
addons_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-612900
--- PASS: TestAddons/StoppedEnableDisable (12.93s)

                                                
                                    
TestCertOptions (64.07s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-628500 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-628500 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (58.6901238s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-628500 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
I1213 10:09:14.703499    2968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}'" cert-options-628500
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-628500 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-628500" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-628500
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-628500: (4.1348396s)
--- PASS: TestCertOptions (64.07s)
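The openssl call above simply inspects the apiserver certificate for the extra SANs requested with --apiserver-ips/--apiserver-names. The same check expressed in Go, assuming the certificate has first been copied out of the node to a local apiserver.crt (illustrative path, not the harness's code):

// Sketch only: decode the apiserver cert and confirm the requested SANs are present.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("apiserver.crt") // illustrative local copy
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	wantIP := net.ParseIP("192.168.15.15")
	ipOK, nameOK := false, false
	for _, ip := range cert.IPAddresses {
		if ip.Equal(wantIP) {
			ipOK = true
		}
	}
	for _, d := range cert.DNSNames {
		if d == "www.google.com" {
			nameOK = true
		}
	}
	fmt.Printf("SAN 192.168.15.15 present: %v, SAN www.google.com present: %v\n", ipOK, nameOK)
}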

                                                
                                    
TestCertExpiration (274.03s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-980800 --memory=3072 --cert-expiration=3m --driver=docker
E1213 10:05:22.011525    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-980800 --memory=3072 --cert-expiration=3m --driver=docker: (54.5542707s)
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-980800 --memory=3072 --cert-expiration=8760h --driver=docker
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-980800 --memory=3072 --cert-expiration=8760h --driver=docker: (33.8775247s)
helpers_test.go:176: Cleaning up "cert-expiration-980800" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-980800
E1213 10:09:46.049042    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-980800: (5.5978158s)
--- PASS: TestCertExpiration (274.03s)
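What this test varies is the NotAfter stamp on the generated certificates: the first start issues them with a 3m lifetime, the second restart re-issues with 8760h. A sketch of reading the remaining validity back from a cert file (the file path is illustrative):

// Sketch only: report how much validity a PEM certificate has left.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("client.crt") // illustrative path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	left := time.Until(cert.NotAfter)
	fmt.Printf("cert expires %s (%.0fh remaining; 8760h was requested)\n",
		cert.NotAfter.Format(time.RFC3339), left.Hours())
}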

                                                
                                    
TestDockerFlags (50.5s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-622000 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-622000 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (45.510041s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-622000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-622000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-622000" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-622000
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-622000: (3.8041415s)
--- PASS: TestDockerFlags (50.50s)
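The two systemctl queries above confirm that --docker-env and --docker-opt actually reached the daemon's systemd unit. systemd prints a single "Environment=..." line, so the assertion reduces to field checks on it; a sketch, where the raw string stands in for the ssh output:

// Sketch only: verify expected variables appear in systemd's Environment= line.
package main

import (
	"fmt"
	"strings"
)

func main() {
	raw := "Environment=FOO=BAR BAZ=BAT" // stand-in for `minikube ssh "sudo systemctl show docker ..."` output
	line := strings.TrimPrefix(strings.TrimSpace(raw), "Environment=")
	vars := strings.Fields(line)
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		found := false
		for _, v := range vars {
			if v == want {
				found = true
			}
		}
		fmt.Printf("%s present: %v\n", want, found)
	}
}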

                                                
                                    
TestForceSystemdFlag (89.57s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-313900 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-313900 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker: (1m22.8979385s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-313900 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-flag-313900" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-313900
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-313900: (5.7533709s)
--- PASS: TestForceSystemdFlag (89.57s)
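docker_test.go:110 (used here and again in TestForceSystemdEnv below) asserts that the node's Docker daemon reports the systemd cgroup driver. A sketch of the same probe, run against a local daemon purely for illustration:

// Sketch only: read the daemon's cgroup driver the way the test does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		panic(err)
	}
	driver := strings.TrimSpace(string(out))
	fmt.Printf("cgroup driver: %s (want systemd when --force-systemd is set)\n", driver)
}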

                                                
                                    
TestForceSystemdEnv (66.15s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-419400 --memory=3072 --alsologtostderr -v=5 --driver=docker
E1213 10:07:36.706852    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-419400 --memory=3072 --alsologtostderr -v=5 --driver=docker: (55.6085853s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-419400 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-env-419400" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-419400
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-419400: (9.9177407s)
--- PASS: TestForceSystemdEnv (66.15s)

                                                
                                    
TestErrorSpam/start (2.62s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-107300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-107300 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-107300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-107300 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-107300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-107300 start --dry-run
--- PASS: TestErrorSpam/start (2.62s)

                                                
                                    
TestErrorSpam/status (2.21s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-107300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-107300 status
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-107300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-107300 status
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-107300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-107300 status
--- PASS: TestErrorSpam/status (2.21s)

                                                
                                    
TestErrorSpam/pause (2.57s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-107300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-107300 pause
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-107300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-107300 pause: (1.1024133s)
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-107300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-107300 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-107300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-107300 pause
--- PASS: TestErrorSpam/pause (2.57s)

                                                
                                    
TestErrorSpam/unpause (2.55s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-107300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-107300 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-107300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-107300 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-107300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-107300 unpause
--- PASS: TestErrorSpam/unpause (2.55s)

                                                
                                    
TestErrorSpam/stop (19.57s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-107300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-107300 stop
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-107300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-107300 stop: (11.9512168s)
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-107300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-107300 stop
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-107300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-107300 stop: (3.6750163s)
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-107300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-107300 stop
error_spam_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-107300 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-107300 stop: (3.9368234s)
--- PASS: TestErrorSpam/stop (19.57s)
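Each TestErrorSpam subtest runs the same command several times and fails if the output contains lines that look like warnings or errors. A sketch of that kind of filter; the marker list and the sample line are illustrative, not the suite's real allow/deny patterns:

// Sketch only: flag output lines that look like error spam.
package main

import (
	"fmt"
	"strings"
)

// suspiciousLines returns output lines that start with an error-ish marker.
func suspiciousLines(output string) []string {
	var bad []string
	for _, line := range strings.Split(output, "\n") {
		trimmed := strings.TrimSpace(line)
		for _, marker := range []string{"!", "X ", "error", "fail"} { // illustrative markers
			if strings.HasPrefix(trimmed, marker) {
				bad = append(bad, trimmed)
				break
			}
		}
	}
	return bad
}

func main() {
	stderr := "* Stopping node nospam-107300 ...\n! example warning line" // illustrative output
	for _, l := range suspiciousLines(stderr) {
		fmt.Println("unexpected:", l)
	}
}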

                                                
                                    
TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\2968\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                    
TestFunctional/serial/StartWithProxy (86.65s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-213400 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker
E1213 08:39:45.998241    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:39:46.004567    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:39:46.016064    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:39:46.037792    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:39:46.079347    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:39:46.161141    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:39:46.323469    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:39:46.645241    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:39:47.287074    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:39:48.568519    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:39:51.130367    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:39:56.252618    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:40:06.494298    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:40:26.976867    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-213400 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker: (1m26.6410876s)
--- PASS: TestFunctional/serial/StartWithProxy (86.65s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (46.99s)

=== RUN   TestFunctional/serial/SoftStart
I1213 08:40:32.150061    2968 config.go:182] Loaded profile config "functional-213400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-213400 --alsologtostderr -v=8
E1213 08:41:07.940347    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-213400 --alsologtostderr -v=8: (46.9864933s)
functional_test.go:678: soft start took 46.9874242s for "functional-213400" cluster.
I1213 08:41:19.137663    2968 config.go:182] Loaded profile config "functional-213400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (46.99s)

                                                
                                    
TestFunctional/serial/KubeContext (0.09s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.09s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.26s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-213400 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.26s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (10.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-213400 cache add registry.k8s.io/pause:3.1: (3.8689444s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-213400 cache add registry.k8s.io/pause:3.3: (3.096649s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-213400 cache add registry.k8s.io/pause:latest: (3.1497528s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (10.12s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (4.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-213400 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3644597613\001
functional_test.go:1092: (dbg) Done: docker build -t minikube-local-cache-test:functional-213400 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3644597613\001: (1.2662747s)
functional_test.go:1104: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 cache add minikube-local-cache-test:functional-213400
functional_test.go:1104: (dbg) Done: out/minikube-windows-amd64.exe -p functional-213400 cache add minikube-local-cache-test:functional-213400: (2.5735399s)
functional_test.go:1109: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 cache delete minikube-local-cache-test:functional-213400
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-213400
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (4.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.19s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.6s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.60s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (4.41s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-213400 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (561.2651ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-windows-amd64.exe -p functional-213400 cache reload: (2.6810838s)
functional_test.go:1178: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (4.41s)
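The non-zero "crictl inspecti" above is expected: the test deletes the image, verifies absence by exit code, runs "cache reload", then verifies presence the same way. A sketch of treating the exit code as a presence check (run against a local crictl only for illustration):

// Sketch only: use crictl's exit status as an image-presence boolean.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func imagePresent(image string) (bool, error) {
	err := exec.Command("crictl", "inspecti", image).Run()
	if err == nil {
		return true, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return false, nil // non-zero exit: the runtime does not have the image
	}
	return false, err // crictl missing, permissions, etc.
}

func main() {
	ok, err := imagePresent("registry.k8s.io/pause:latest")
	fmt.Println(ok, err)
}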

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.38s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.38s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.39s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 kubectl -- --context functional-213400 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.39s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (2.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out\kubectl.exe --context functional-213400 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (2.16s)

                                                
                                    
TestFunctional/serial/ExtraConfig (43.48s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-213400 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-213400 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.4774324s)
functional_test.go:776: restart took 43.4774324s for "functional-213400" cluster.
I1213 08:42:25.511127    2968 config.go:182] Loaded profile config "functional-213400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (43.48s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.13s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-213400 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.13s)
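The phase/status pairs above come from parsing "kubectl get po -l tier=control-plane -o json". A sketch of that parse, with types covering only the fields used; the embedded JSON is a truncated stand-in for real kubectl output:

// Sketch only: report each control-plane pod's phase and Ready condition.
package main

import (
	"encoding/json"
	"fmt"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	raw := []byte(`{"items":[{"metadata":{"name":"etcd-functional-213400"},
		"status":{"phase":"Running","conditions":[{"type":"Ready","status":"True"}]}}]}`)
	var pods podList
	if err := json.Unmarshal(raw, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "False"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase: %s, ready: %s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}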

                                                
                                    
TestFunctional/serial/LogsCmd (1.79s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 logs
functional_test.go:1251: (dbg) Done: out/minikube-windows-amd64.exe -p functional-213400 logs: (1.7876017s)
--- PASS: TestFunctional/serial/LogsCmd (1.79s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.77s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3008823247\001\logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-windows-amd64.exe -p functional-213400 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3008823247\001\logs.txt: (1.7564649s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.77s)

                                                
                                    
TestFunctional/serial/InvalidService (5.76s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-213400 apply -f testdata\invalidsvc.yaml
E1213 08:42:29.862702    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2340: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-213400
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-213400: exit status 115 (1.0823199s)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32483 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_service_9c977cb937a5c6299cc91c983e64e702e081bf76_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-213400 delete -f testdata\invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-213400 delete -f testdata\invalidsvc.yaml: (1.294693s)
--- PASS: TestFunctional/serial/InvalidService (5.76s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (1.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-213400 config get cpus: exit status 14 (184.0146ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-213400 config get cpus: exit status 14 (171.9962ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.23s)
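Exit status 14 here is the expected "key not set" path: get on an unset key errors, set/get round-trips the value, unset clears it. A toy model of those semantics, with a map standing in for minikube's on-disk config:

// Sketch only: get/set/unset semantics matching the exit-14 behavior above.
package main

import (
	"errors"
	"fmt"
)

var errKeyNotFound = errors.New("specified key could not be found in config")

type config map[string]string

func (c config) get(k string) (string, error) {
	v, ok := c[k]
	if !ok {
		return "", errKeyNotFound
	}
	return v, nil
}

func main() {
	c := config{}
	if _, err := c.get("cpus"); err != nil {
		fmt.Println("exit 14:", err) // matches the stderr in the log
	}
	c["cpus"] = "2" // config set cpus 2
	v, _ := c.get("cpus")
	fmt.Println("cpus =", v)
	delete(c, "cpus") // config unset cpus; the next get errors again
}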

                                                
                                    
TestFunctional/parallel/DryRun (1.47s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-213400 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:989: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-213400 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (603.0669ms)

-- stdout --
	* [functional-213400] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I1213 08:43:46.866832    8448 out.go:360] Setting OutFile to fd 1704 ...
	I1213 08:43:46.909821    8448 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:43:46.909821    8448 out.go:374] Setting ErrFile to fd 1648...
	I1213 08:43:46.909821    8448 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:43:46.925066    8448 out.go:368] Setting JSON to false
	I1213 08:43:46.927447    8448 start.go:133] hostinfo: {"hostname":"minikube4","uptime":1234,"bootTime":1765614192,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 08:43:46.927447    8448 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 08:43:46.930703    8448 out.go:179] * [functional-213400] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 08:43:46.932426    8448 notify.go:221] Checking for updates...
	I1213 08:43:46.936169    8448 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:43:46.939787    8448 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:43:46.942600    8448 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 08:43:46.946381    8448 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:43:46.950046    8448 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:43:46.953432    8448 config.go:182] Loaded profile config "functional-213400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 08:43:46.954428    8448 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:43:47.067241    8448 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 08:43:47.070258    8448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:43:47.306242    8448 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:83 SystemTime:2025-12-13 08:43:47.288674939 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 08:43:47.309241    8448 out.go:179] * Using the docker driver based on existing profile
	I1213 08:43:47.311245    8448 start.go:309] selected driver: docker
	I1213 08:43:47.311245    8448 start.go:927] validating driver "docker" against &{Name:functional-213400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-213400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:43:47.311245    8448 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:43:47.350252    8448 out.go:203] 
	W1213 08:43:47.354243    8448 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 08:43:47.357248    8448 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-213400 --dry-run --alsologtostderr -v=1 --driver=docker
--- PASS: TestFunctional/parallel/DryRun (1.47s)
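The dry run fails validation, not startup: minikube parses the --memory string into MiB and rejects anything under its usable minimum (1800MB per the message above). A simplified sketch of such a parse-and-check; this is not minikube's actual parser:

// Sketch only: parse a memory flag and enforce a minimum, as the dry run does.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func parseMB(s string) (int, error) {
	s = strings.ToUpper(strings.TrimSpace(s))
	for _, suffix := range []string{"MIB", "MB", "M"} { // longest suffix first
		if strings.HasSuffix(s, suffix) {
			return strconv.Atoi(strings.TrimSuffix(s, suffix))
		}
	}
	return strconv.Atoi(s)
}

func main() {
	const minMB = 1800 // minimum quoted in the log above
	req, err := parseMB("250MB")
	if err != nil {
		panic(err)
	}
	if req < minMB {
		fmt.Printf("requested %dMiB is less than the usable minimum of %dMB\n", req, minMB)
	}
}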

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.68s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-213400 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-213400 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (678.4118ms)

-- stdout --
	* [functional-213400] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I1213 08:43:43.434819   13324 out.go:360] Setting OutFile to fd 860 ...
	I1213 08:43:43.483830   13324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:43:43.483830   13324 out.go:374] Setting ErrFile to fd 1824...
	I1213 08:43:43.483830   13324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:43:43.498830   13324 out.go:368] Setting JSON to false
	I1213 08:43:43.502253   13324 start.go:133] hostinfo: {"hostname":"minikube4","uptime":1230,"bootTime":1765614192,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 08:43:43.502253   13324 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 08:43:43.506726   13324 out.go:179] * [functional-213400] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 08:43:43.512305   13324 notify.go:221] Checking for updates...
	I1213 08:43:43.516181   13324 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 08:43:43.524186   13324 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:43:43.529448   13324 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 08:43:43.533844   13324 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:43:43.540844   13324 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:43:43.544840   13324 config.go:182] Loaded profile config "functional-213400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 08:43:43.545843   13324 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:43:43.665840   13324 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 08:43:43.669844   13324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:43:43.925799   13324 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:83 SystemTime:2025-12-13 08:43:43.907328038 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 08:43:43.929808   13324 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1213 08:43:43.933810   13324 start.go:309] selected driver: docker
	I1213 08:43:43.933810   13324 start.go:927] validating driver "docker" against &{Name:functional-213400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-213400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:43:43.933810   13324 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:43:43.989155   13324 out.go:203] 
	W1213 08:43:43.993147   13324 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 08:43:43.997129   13324 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.68s)
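The French messages above are the expected output: the test re-issues the same under-provisioned dry run under a French locale and asserts the localized RSRC_INSUFFICIENT_REQ_MEMORY text. A sketch of forcing this by hand, assuming minikube's translation layer picks the language up from the standard locale variables (LC_ALL here is an assumption, not taken from this log):

    $env:LC_ALL = "fr"    # assumption: locale read from LC_ALL/LANG
    out/minikube-windows-amd64.exe start -p functional-213400 --dry-run --memory 250MB --driver=docker
    Remove-Item Env:LC_ALL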

                                                
                                    
TestFunctional/parallel/StatusCmd (1.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 status
functional_test.go:875: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.90s)
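The -f flag exercised above takes a Go template over minikube's status struct ("kublet" is the spelling used by the test source itself), so arbitrary field subsets can be extracted; -o json gives the same data in machine-readable form. A small sketch:

    out/minikube-windows-amd64.exe -p functional-213400 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
    out/minikube-windows-amd64.exe -p functional-213400 status -o json | ConvertFrom-Json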

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.42s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (57.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [632321bd-9b90-4527-900a-674b00b32131] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0060858s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-213400 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-213400 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-213400 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-213400 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [621c3a6d-e236-4246-a8b2-301927122214] Pending
helpers_test.go:353: "sp-pod" [621c3a6d-e236-4246-a8b2-301927122214] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [621c3a6d-e236-4246-a8b2-301927122214] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 42.0061834s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-213400 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-213400 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-213400 delete -f testdata/storage-provisioner/pod.yaml: (1.5022864s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-213400 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [efe5c8f5-dc87-4355-9d2a-63a29b2a66eb] Pending
helpers_test.go:353: "sp-pod" [efe5c8f5-dc87-4355-9d2a-63a29b2a66eb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [efe5c8f5-dc87-4355-9d2a-63a29b2a66eb] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.0074782s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-213400 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (57.15s)
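The touch/delete/re-apply/ls sequence above is the actual persistence check: data written to the claim must survive pod recreation. Condensed, using the same testdata manifests:

    kubectl --context functional-213400 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-213400 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-213400 apply -f testdata/storage-provisioner/pod.yaml
    # once the new sp-pod is Running, the file written by the first pod is still there:
    kubectl --context functional-213400 exec sp-pod -- ls /tmp/mount    # expect: foo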

                                                
                                    
TestFunctional/parallel/SSHCmd (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.39s)

                                                
                                    
TestFunctional/parallel/CpCmd (3.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 ssh -n functional-213400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 cp functional-213400:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalparallelCpCmd3332897752\001\cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 ssh -n functional-213400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 ssh -n functional-213400 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (3.91s)
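minikube cp is exercised in both directions above (host to node, node back to host), plus a copy into a node directory that does not yet exist. A sketch with an illustrative local destination path:

    out/minikube-windows-amd64.exe -p functional-213400 cp testdata\cp-test.txt /home/docker/cp-test.txt
    out/minikube-windows-amd64.exe -p functional-213400 cp functional-213400:/home/docker/cp-test.txt C:\temp\cp-test.txt
    out/minikube-windows-amd64.exe -p functional-213400 ssh -n functional-213400 "sudo cat /home/docker/cp-test.txt"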

                                                
                                    
TestFunctional/parallel/MySQL (81.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-213400 replace --force -f testdata\mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-g4tdx" [e83eca52-ab88-4602-8bf0-08e8212f4ce7] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-g4tdx" [e83eca52-ab88-4602-8bf0-08e8212f4ce7] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 58.0055824s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-213400 exec mysql-6bcdcbc558-g4tdx -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-213400 exec mysql-6bcdcbc558-g4tdx -- mysql -ppassword -e "show databases;": exit status 1 (218.8129ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 08:43:37.748041    2968 retry.go:31] will retry after 1.146949333s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-213400 exec mysql-6bcdcbc558-g4tdx -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-213400 exec mysql-6bcdcbc558-g4tdx -- mysql -ppassword -e "show databases;": exit status 1 (197.0105ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 08:43:39.095725    2968 retry.go:31] will retry after 921.148394ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-213400 exec mysql-6bcdcbc558-g4tdx -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-213400 exec mysql-6bcdcbc558-g4tdx -- mysql -ppassword -e "show databases;": exit status 1 (209.9368ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 08:43:40.231390    2968 retry.go:31] will retry after 2.768546111s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-213400 exec mysql-6bcdcbc558-g4tdx -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-213400 exec mysql-6bcdcbc558-g4tdx -- mysql -ppassword -e "show databases;": exit status 1 (209.5459ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 08:43:43.216541    2968 retry.go:31] will retry after 4.271235525s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-213400 exec mysql-6bcdcbc558-g4tdx -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-213400 exec mysql-6bcdcbc558-g4tdx -- mysql -ppassword -e "show databases;": exit status 1 (266.2696ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 08:43:47.758636    2968 retry.go:31] will retry after 5.716694996s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-213400 exec mysql-6bcdcbc558-g4tdx -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-213400 exec mysql-6bcdcbc558-g4tdx -- mysql -ppassword -e "show databases;": exit status 1 (218.2969ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 08:43:53.697853    2968 retry.go:31] will retry after 6.464230664s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-213400 exec mysql-6bcdcbc558-g4tdx -- mysql -ppassword -e "show databases;"
E1213 08:44:45.997794    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:45:13.706168    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MySQL (81.21s)
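ERROR 2002 (socket not yet up) and the transient ERROR 1045 above are normal while mysqld initializes, which is why the harness retries with a growing backoff instead of failing on the first exec. The pattern, sketched:

    # poll until the server accepts the query; errors 2002/1045 just mean "still starting"
    for ($i = 0; $i -lt 10; $i++) {
        kubectl --context functional-213400 exec mysql-6bcdcbc558-g4tdx -- mysql -ppassword -e "show databases;"
        if ($LASTEXITCODE -eq 0) { break }
        Start-Sleep -Seconds (2 * ($i + 1))
    }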

                                                
                                    
TestFunctional/parallel/FileSync (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/2968/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 ssh "sudo cat /etc/test/nested/copy/2968/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.66s)
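The path /etc/test/nested/copy/2968/hosts is seeded by the harness (2968 is the test process id); the mechanism under test is minikube's file sync, which, as an assumption here consistent with the check above, mirrors anything under the MINIKUBE_HOME files directory into the node at the same path on start:

    # illustrative paths; MINIKUBE_HOME in this CI run is a custom location
    New-Item -ItemType Directory -Force "$env:USERPROFILE\.minikube\files\etc\test" | Out-Null
    Set-Content "$env:USERPROFILE\.minikube\files\etc\test\hosts" "Test file for checking file sync process"
    # after the next start, the file is visible inside the node:
    out/minikube-windows-amd64.exe -p functional-213400 ssh "sudo cat /etc/test/hosts"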

                                                
                                    
TestFunctional/parallel/CertSync (3.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/2968.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 ssh "sudo cat /etc/ssl/certs/2968.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/2968.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 ssh "sudo cat /usr/share/ca-certificates/2968.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/29682.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 ssh "sudo cat /etc/ssl/certs/29682.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/29682.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 ssh "sudo cat /usr/share/ca-certificates/29682.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (3.54s)
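The three locations checked per certificate (/etc/ssl/certs/<name>.pem, /usr/share/ca-certificates/<name>.pem, and an OpenSSL subject-hash name such as 51391683.0) reflect minikube's cert sync. A sketch, assuming extra CA files dropped under the MINIKUBE_HOME certs directory are synced on start (file name illustrative):

    Copy-Item C:\certs\my-ca.pem "$env:USERPROFILE\.minikube\certs\"
    out/minikube-windows-amd64.exe -p functional-213400 ssh "sudo cat /etc/ssl/certs/my-ca.pem"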

                                                
                                    
TestFunctional/parallel/NodeLabels (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-213400 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.14s)
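The Go template above prints every label key on the first node; kubectl's built-in flag gives the same information without a template:

    kubectl --context functional-213400 get nodes --show-labels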

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-213400 ssh "sudo systemctl is-active crio": exit status 1 (590.8496ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
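The non-zero exit with "inactive" on stdout is the pass condition here: systemctl is-active returns non-zero for units that are not running (status 3, surfaced through ssh above), confirming cri-o is disabled while Docker is the active runtime:

    out/minikube-windows-amd64.exe -p functional-213400 ssh "sudo systemctl is-active crio"
    if ($LASTEXITCODE -ne 0) { Write-Host "crio correctly inactive" }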

                                                
                                    
TestFunctional/parallel/License (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2293: (dbg) Done: out/minikube-windows-amd64.exe license: (1.6675906s)
--- PASS: TestFunctional/parallel/License (1.68s)

                                                
                                    
TestFunctional/parallel/DockerEnv/powershell (5.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:514: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-213400 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-213400"
functional_test.go:514: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-213400 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-213400": (3.3300952s)
functional_test.go:537: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-213400 docker-env | Invoke-Expression ; docker images"
functional_test.go:537: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-213400 docker-env | Invoke-Expression ; docker images": (2.4735588s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (5.81s)
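docker-env emits shell-specific environment assignments; piping them through Invoke-Expression, as the test does, points the local docker CLI at the daemon inside the node:

    out/minikube-windows-amd64.exe -p functional-213400 docker-env | Invoke-Expression
    docker images    # now lists the images inside the minikube node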

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.36s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.30s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.30s)
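All three subtests run the same command and only vary the pre-existing kubeconfig state; update-context rewrites the profile's kubeconfig entry (for example after an IP or port change):

    out/minikube-windows-amd64.exe -p functional-213400 update-context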

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-213400 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-213400
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-213400
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-213400 image ls --format short --alsologtostderr:
I1213 08:43:52.053566    8424 out.go:360] Setting OutFile to fd 1596 ...
I1213 08:43:52.100339    8424 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:43:52.100339    8424 out.go:374] Setting ErrFile to fd 1760...
I1213 08:43:52.101001    8424 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:43:52.112817    8424 config.go:182] Loaded profile config "functional-213400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 08:43:52.112817    8424 config.go:182] Loaded profile config "functional-213400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 08:43:52.121002    8424 cli_runner.go:164] Run: docker container inspect functional-213400 --format={{.State.Status}}
I1213 08:43:52.188904    8424 ssh_runner.go:195] Run: systemctl --version
I1213 08:43:52.191900    8424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-213400
I1213 08:43:52.244906    8424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63432 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-213400\id_rsa Username:docker}
I1213 08:43:52.377296    8424 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-213400 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server               │ functional-213400 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ registry.k8s.io/kube-proxy                  │ v1.34.2           │ 8aa150647e88a │ 71.9MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0           │ a3e246e9556e9 │ 62.5MB │
│ docker.io/library/minikube-local-cache-test │ functional-213400 │ bdbeacb5b0cf7 │ 30B    │
│ public.ecr.aws/docker/library/mysql         │ 8.4               │ 20d0be4ee4524 │ 785MB  │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ public.ecr.aws/nginx/nginx                  │ alpine            │ a236f84b9d5d2 │ 53.7MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.2           │ a5f569d49a979 │ 88MB   │
│ registry.k8s.io/kube-controller-manager     │ v1.34.2           │ 01e8bacf0f500 │ 74.9MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ registry.k8s.io/kube-scheduler              │ v1.34.2           │ 88320b5498ff2 │ 52.8MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-213400 image ls --format table --alsologtostderr:
I1213 08:43:53.502581     960 out.go:360] Setting OutFile to fd 1444 ...
I1213 08:43:53.546933     960 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:43:53.546933     960 out.go:374] Setting ErrFile to fd 1516...
I1213 08:43:53.546933     960 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:43:53.558927     960 config.go:182] Loaded profile config "functional-213400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 08:43:53.558927     960 config.go:182] Loaded profile config "functional-213400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 08:43:53.565936     960 cli_runner.go:164] Run: docker container inspect functional-213400 --format={{.State.Status}}
I1213 08:43:53.634570     960 ssh_runner.go:195] Run: systemctl --version
I1213 08:43:53.637673     960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-213400
I1213 08:43:53.698694     960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63432 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-213400\id_rsa Username:docker}
I1213 08:43:53.855903     960 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-213400 image ls --format json --alsologtostderr:
[{"id":"bdbeacb5b0cf71d2b6aa2677e9a91f59ca519b02fe765645bb7ea493f793589d","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-213400"],"size":"30"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":[],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"785000000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":[],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"53700000"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"74900000"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"71900000"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-213400","docker.io/kicbase/echo-server:latest"],"size":"4940000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"52800000"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"62500000"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"88000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-213400 image ls --format json --alsologtostderr:
I1213 08:43:53.023809    8448 out.go:360] Setting OutFile to fd 1552 ...
I1213 08:43:53.069029    8448 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:43:53.069029    8448 out.go:374] Setting ErrFile to fd 1884...
I1213 08:43:53.069029    8448 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:43:53.081054    8448 config.go:182] Loaded profile config "functional-213400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 08:43:53.081400    8448 config.go:182] Loaded profile config "functional-213400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 08:43:53.089601    8448 cli_runner.go:164] Run: docker container inspect functional-213400 --format={{.State.Status}}
I1213 08:43:53.157061    8448 ssh_runner.go:195] Run: systemctl --version
I1213 08:43:53.163064    8448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-213400
I1213 08:43:53.221971    8448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63432 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-213400\id_rsa Username:docker}
I1213 08:43:53.347530    8448 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.47s)
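The JSON form is the convenient one for scripting; for instance, listing tags ordered by size, using the field names visible in the output above:

    $imgs = out/minikube-windows-amd64.exe -p functional-213400 image ls --format json | ConvertFrom-Json
    $imgs | Sort-Object { [long]$_.size } -Descending | ForEach-Object { $_.repoTags -join ", " }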

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-213400 image ls --format yaml --alsologtostderr:
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: bdbeacb5b0cf71d2b6aa2677e9a91f59ca519b02fe765645bb7ea493f793589d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-213400
size: "30"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests: []
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "53700000"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "88000000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-213400
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests: []
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "785000000"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "52800000"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "62500000"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "74900000"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "71900000"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-213400 image ls --format yaml --alsologtostderr:
I1213 08:43:52.527799   13140 out.go:360] Setting OutFile to fd 2020 ...
I1213 08:43:52.582399   13140 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:43:52.582399   13140 out.go:374] Setting ErrFile to fd 1992...
I1213 08:43:52.582399   13140 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:43:52.594396   13140 config.go:182] Loaded profile config "functional-213400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 08:43:52.594396   13140 config.go:182] Loaded profile config "functional-213400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 08:43:52.601387   13140 cli_runner.go:164] Run: docker container inspect functional-213400 --format={{.State.Status}}
I1213 08:43:52.660807   13140 ssh_runner.go:195] Run: systemctl --version
I1213 08:43:52.663959   13140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-213400
I1213 08:43:52.721414   13140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63432 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-213400\id_rsa Username:docker}
I1213 08:43:52.867935   13140 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (5.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-213400 ssh pgrep buildkitd: exit status 1 (572.2129ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 image build -t localhost/my-image:functional-213400 testdata\build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-windows-amd64.exe -p functional-213400 image build -t localhost/my-image:functional-213400 testdata\build --alsologtostderr: (4.6957289s)
functional_test.go:338: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-213400 image build -t localhost/my-image:functional-213400 testdata\build --alsologtostderr:
I1213 08:43:53.198980    5832 out.go:360] Setting OutFile to fd 2020 ...
I1213 08:43:53.261749    5832 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:43:53.261793    5832 out.go:374] Setting ErrFile to fd 1992...
I1213 08:43:53.261832    5832 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:43:53.275737    5832 config.go:182] Loaded profile config "functional-213400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 08:43:53.297735    5832 config.go:182] Loaded profile config "functional-213400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1213 08:43:53.304957    5832 cli_runner.go:164] Run: docker container inspect functional-213400 --format={{.State.Status}}
I1213 08:43:53.365654    5832 ssh_runner.go:195] Run: systemctl --version
I1213 08:43:53.369082    5832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-213400
I1213 08:43:53.426480    5832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63432 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-213400\id_rsa Username:docker}
I1213 08:43:53.568941    5832 build_images.go:162] Building image from path: C:\Users\jenkins.minikube4\AppData\Local\Temp\build.1049284435.tar
I1213 08:43:53.574938    5832 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 08:43:53.598098    5832 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1049284435.tar
I1213 08:43:53.607760    5832 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1049284435.tar: stat -c "%s %y" /var/lib/minikube/build/build.1049284435.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1049284435.tar': No such file or directory
I1213 08:43:53.607760    5832 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\AppData\Local\Temp\build.1049284435.tar --> /var/lib/minikube/build/build.1049284435.tar (3072 bytes)
I1213 08:43:53.652664    5832 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1049284435
I1213 08:43:53.670668    5832 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1049284435 -xf /var/lib/minikube/build/build.1049284435.tar
I1213 08:43:53.686105    5832 docker.go:361] Building image: /var/lib/minikube/build/build.1049284435
I1213 08:43:53.692907    5832 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-213400 /var/lib/minikube/build/build.1049284435
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.7s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.5s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.2s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:0f575bb20306ba75fae27dead21b79b5f4b1652cba5090565405379e1aa4a56a
#8 writing image sha256:0f575bb20306ba75fae27dead21b79b5f4b1652cba5090565405379e1aa4a56a done
#8 naming to localhost/my-image:functional-213400 0.0s done
#8 DONE 0.2s
I1213 08:43:57.737287    5832 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-213400 /var/lib/minikube/build/build.1049284435: (4.0443614s)
I1213 08:43:57.741619    5832 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1049284435
I1213 08:43:57.759216    5832 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1049284435.tar
I1213 08:43:57.773203    5832 build_images.go:218] Built localhost/my-image:functional-213400 from C:\Users\jenkins.minikube4\AppData\Local\Temp\build.1049284435.tar
I1213 08:43:57.773203    5832 build_images.go:134] succeeded building to: functional-213400
I1213 08:43:57.773203    5832 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.70s)
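
For reference, the build steps logged above (#1 transferring a 97B Dockerfile, #5 pulling gcr.io/k8s-minikube/busybox, #6 RUN true, #7 ADD content.txt) imply a test Dockerfile of roughly the following shape. This is a sketch reconstructed from the log output, not necessarily the exact fixture the test uses:

    # Reconstructed from build steps #5-#7 above; the real fixture may differ.
    FROM gcr.io/k8s-minikube/busybox:latest
    RUN true
    ADD content.txt /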

TestFunctional/parallel/ImageCommands/Setup (1.76s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.6717969s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-213400
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.76s)

TestFunctional/parallel/Version/short (0.17s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 version --short
--- PASS: TestFunctional/parallel/Version/short (0.17s)

TestFunctional/parallel/Version/components (1.05s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-windows-amd64.exe -p functional-213400 version -o=json --components: (1.0502214s)
--- PASS: TestFunctional/parallel/Version/components (1.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-213400 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-213400 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-rvhwd" [259c6140-35ac-4ef6-8164-6b4b4f0f5070] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-rvhwd" [259c6140-35ac-4ef6-8164-6b4b4f0f5070] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.0086133s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.33s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 image load --daemon kicbase/echo-server:functional-213400 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-windows-amd64.exe -p functional-213400 image load --daemon kicbase/echo-server:functional-213400 --alsologtostderr: (3.0508705s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.52s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 image load --daemon kicbase/echo-server:functional-213400 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-windows-amd64.exe -p functional-213400 image load --daemon kicbase/echo-server:functional-213400 --alsologtostderr: (3.21186s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.76s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-213400
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 image load --daemon kicbase/echo-server:functional-213400 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-213400 image load --daemon kicbase/echo-server:functional-213400 --alsologtostderr: (3.9021844s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.53s)

TestFunctional/parallel/ServiceCmd/List (0.96s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.96s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-213400 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-213400 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-213400 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-213400 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 8792: OpenProcess: The parameter is incorrect.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.05s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.98s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 service list -o json
functional_test.go:1504: Took "985.1746ms" to run "out/minikube-windows-amd64.exe -p functional-213400 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.98s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-213400 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (54.77s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-213400 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [f302802a-79d2-47f1-ac23-a24acff60427] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [f302802a-79d2-47f1-ac23-a24acff60427] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 54.0047143s
I1213 08:43:41.892551    2968 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (54.77s)

TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-213400 service --namespace=default --https --url hello-node: exit status 1 (15.0094911s)

-- stdout --
	https://127.0.0.1:63687

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1532: found endpoint: https://127.0.0.1:63687
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 image save kicbase/echo-server:functional-213400 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-windows-amd64.exe -p functional-213400 image save kicbase/echo-server:functional-213400 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr: (1.1116821s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.11s)

TestFunctional/parallel/ImageCommands/ImageRemove (1.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 image rm kicbase/echo-server:functional-213400 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.40s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.49s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-213400
functional_test.go:439: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 image save --daemon kicbase/echo-server:functional-213400 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-213400
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.90s)

TestFunctional/parallel/ServiceCmd/Format (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-213400 service hello-node --url --format={{.IP}}: exit status 1 (15.0116624s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.01s)

TestFunctional/parallel/ServiceCmd/URL (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-213400 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-213400 service hello-node --url: exit status 1 (15.0097116s)

-- stdout --
	http://127.0.0.1:63713

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1575: found endpoint for hello-node: http://127.0.0.1:63713
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-213400 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-213400 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 2808: OpenProcess: The parameter is incorrect.
helpers_test.go:526: unable to kill pid 7884: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

TestFunctional/parallel/ProfileCmd/profile_not_create (1.07s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (1.07s)

TestFunctional/parallel/ProfileCmd/profile_list (0.89s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1330: Took "721.8339ms" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1344: Took "165.2187ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.89s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.93s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1381: Took "771.893ms" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1394: Took "161.4434ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.93s)

TestFunctional/delete_echo-server_images (0.14s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-213400
--- PASS: TestFunctional/delete_echo-server_images (0.14s)

TestFunctional/delete_my-image_image (0.06s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-213400
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-213400
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\2968\hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.10s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (9.86s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-482100 cache add registry.k8s.io/pause:3.1: (3.5629766s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-482100 cache add registry.k8s.io/pause:3.3: (3.0912988s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-482100 cache add registry.k8s.io/pause:latest: (3.2038928s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (9.86s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (3.79s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-482100 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach4252804748\001
E1213 09:04:46.005967    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1104: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 cache add minikube-local-cache-test:functional-482100
functional_test.go:1104: (dbg) Done: out/minikube-windows-amd64.exe -p functional-482100 cache add minikube-local-cache-test:functional-482100: (2.5585999s)
functional_test.go:1109: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 cache delete minikube-local-cache-test:functional-482100
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-482100
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (3.79s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.17s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.58s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.58s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (4.51s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-482100 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (574.8624ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-windows-amd64.exe -p functional-482100 cache reload: (2.7910205s)
functional_test.go:1178: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (4.51s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.35s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 logs
functional_test.go:1251: (dbg) Done: out/minikube-windows-amd64.exe -p functional-482100 logs: (1.2649319s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3934132998\001\logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-windows-amd64.exe -p functional-482100 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3934132998\001\logs.txt: (1.3941663s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (1.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-482100 config get cpus: exit status 14 (168.0011ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-482100 config get cpus: exit status 14 (183.1705ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (1.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (1.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-482100 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-482100 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 23 (597.9768ms)

-- stdout --
	* [functional-482100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I1213 09:20:50.812237    9284 out.go:360] Setting OutFile to fd 1076 ...
	I1213 09:20:50.854994    9284 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:20:50.855020    9284 out.go:374] Setting ErrFile to fd 2044...
	I1213 09:20:50.855057    9284 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:20:50.868294    9284 out.go:368] Setting JSON to false
	I1213 09:20:50.870960    9284 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3458,"bootTime":1765614192,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 09:20:50.871085    9284 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 09:20:50.874463    9284 out.go:179] * [functional-482100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 09:20:50.887546    9284 notify.go:221] Checking for updates...
	I1213 09:20:50.889662    9284 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 09:20:50.891743    9284 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:20:50.894329    9284 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 09:20:50.896922    9284 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:20:50.899182    9284 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:20:50.901714    9284 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 09:20:50.902821    9284 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:20:51.019723    9284 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 09:20:51.022720    9284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:20:51.252963    9284 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-13 09:20:51.235945498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 09:20:51.256958    9284 out.go:179] * Using the docker driver based on existing profile
	I1213 09:20:51.258958    9284 start.go:309] selected driver: docker
	I1213 09:20:51.258958    9284 start.go:927] validating driver "docker" against &{Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:20:51.258958    9284 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:20:51.296096    9284 out.go:203] 
	W1213 09:20:51.298325    9284 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 09:20:51.300572    9284 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-482100 --dry-run --alsologtostderr -v=1 --driver=docker --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (1.46s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.73s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-482100 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-482100 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 23 (726.9738ms)

-- stdout --
	* [functional-482100] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I1213 09:20:50.089894   13140 out.go:360] Setting OutFile to fd 1144 ...
	I1213 09:20:50.132405   13140 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:20:50.132405   13140 out.go:374] Setting ErrFile to fd 1880...
	I1213 09:20:50.132405   13140 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:20:50.145627   13140 out.go:368] Setting JSON to false
	I1213 09:20:50.147627   13140 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3457,"bootTime":1765614192,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1213 09:20:50.147627   13140 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1213 09:20:50.151667   13140 out.go:179] * [functional-482100] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1213 09:20:50.155820   13140 notify.go:221] Checking for updates...
	I1213 09:20:50.156575   13140 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1213 09:20:50.158731   13140 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:20:50.162795   13140 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1213 09:20:50.164494   13140 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:20:50.167317   13140 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:20:50.170016   13140 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1213 09:20:50.171101   13140 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:20:50.370846   13140 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1213 09:20:50.377346   13140 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:20:50.606458   13140 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-13 09:20:50.587245701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1213 09:20:50.609559   13140 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1213 09:20:50.612658   13140 start.go:309] selected driver: docker
	I1213 09:20:50.612658   13140 start.go:927] validating driver "docker" against &{Name:functional-482100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-482100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:20:50.612658   13140 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:20:50.695686   13140 out.go:203] 
	W1213 09:20:50.697515   13140 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 09:20:50.699536   13140 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.73s)
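
The French stderr above is the expected result: InternationalLanguage starts the profile under a French locale with a deliberately undersized memory request and asserts the localized failure. The message translates to "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250MiB is below the usable minimum of 1800MB". A minimal reproduction sketch, assuming PowerShell; the locale variable and flags below are illustrative, not copied from this log:

# Hypothetical repro (assumed flags): a French locale plus a sub-minimum memory request.
$env:LC_ALL = "fr"
out/minikube-windows-amd64.exe start -p functional-482100 --dry-run --memory 250MB --driver=docker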

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.44s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.44s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (1.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (1.21s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (3.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 ssh -n functional-482100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 cp functional-482100:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp315632686\001\cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 ssh -n functional-482100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 ssh -n functional-482100 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (3.43s)
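
The copy round trip exercised above reduces to two commands; `<profile>:<path>` addresses a file inside the node, and the host destination below is illustrative:

# Host -> node, then node -> host (destination path is an example, not from this run).
out/minikube-windows-amd64.exe -p functional-482100 cp testdata\cp-test.txt /home/docker/cp-test.txt
out/minikube-windows-amd64.exe -p functional-482100 cp functional-482100:/home/docker/cp-test.txt C:\tmp\cp-test.txt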

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.55s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/2968/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 ssh "sudo cat /etc/test/nested/copy/2968/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.55s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (3.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/2968.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 ssh "sudo cat /etc/ssl/certs/2968.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/2968.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 ssh "sudo cat /usr/share/ca-certificates/2968.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/29682.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 ssh "sudo cat /etc/ssl/certs/29682.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/29682.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 ssh "sudo cat /usr/share/ca-certificates/29682.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (3.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-482100 ssh "sudo systemctl is-active crio": exit status 1 (530.0915ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.53s)
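
The non-zero exit above is the assertion, not a failure: with docker as the active runtime, crio must report inactive, and systemctl is-active exits 3 for an inactive unit. A quick manual check against the same profile (the second command is an illustrative counterpart, not from this log):

out/minikube-windows-amd64.exe -p functional-482100 ssh "sudo systemctl is-active crio"    # expect "inactive", exit 3
out/minikube-windows-amd64.exe -p functional-482100 ssh "sudo systemctl is-active docker"  # expect "active", exit 0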

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (2.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2293: (dbg) Done: out/minikube-windows-amd64.exe license: (2.2359938s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (2.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-482100 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-482100 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.83s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.83s)
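
Note the deliberate misspelling in the first command above: `profile lis` verifies that an invalid argument does not implicitly create a profile, and the JSON listing then confirms the profile set is unchanged:

# The misspelled subcommand must not create a new profile entry.
out/minikube-windows-amd64.exe profile lis
out/minikube-windows-amd64.exe profile list --output json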

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.8s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1330: Took "642.2554ms" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1344: Took "157.1576ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.80s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.81s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1381: Took "646.0562ms" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1394: Took "159.8684ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.81s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (1.88s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-windows-amd64.exe -p functional-482100 version -o=json --components: (1.8752304s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (1.88s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-482100 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-482100
docker.io/kicbase/echo-server:functional-482100
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-482100 image ls --format short --alsologtostderr:
I1213 09:22:28.485019    9368 out.go:360] Setting OutFile to fd 916 ...
I1213 09:22:28.536736    9368 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:22:28.536736    9368 out.go:374] Setting ErrFile to fd 1104...
I1213 09:22:28.536736    9368 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:22:28.548406    9368 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 09:22:28.549042    9368 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 09:22:28.556005    9368 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
I1213 09:22:28.620804    9368 ssh_runner.go:195] Run: systemctl --version
I1213 09:22:28.626110    9368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
I1213 09:22:28.687300    9368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
I1213 09:22:28.843575    9368 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-482100 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ docker.io/library/minikube-local-cache-test │ functional-482100 │ bdbeacb5b0cf7 │ 30B    │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0    │ aa9d02839d8de │ 89.7MB │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0    │ 45f3cc72d235f │ 75.8MB │
│ registry.k8s.io/coredns/coredns             │ v1.13.1           │ aa5e3ebc0dfed │ 78.1MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0           │ a3e246e9556e9 │ 62.5MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ docker.io/kicbase/echo-server               │ functional-482100 │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0    │ 7bb6219ddab95 │ 51.7MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0    │ 8a4ded35a3eb1 │ 70.7MB │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-482100 image ls --format table --alsologtostderr:
I1213 09:22:30.288584   10312 out.go:360] Setting OutFile to fd 1876 ...
I1213 09:22:30.334379   10312 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:22:30.334379   10312 out.go:374] Setting ErrFile to fd 1588...
I1213 09:22:30.334379   10312 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:22:30.347329   10312 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 09:22:30.348329   10312 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 09:22:30.355736   10312 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
I1213 09:22:30.416543   10312 ssh_runner.go:195] Run: systemctl --version
I1213 09:22:30.419080   10312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
I1213 09:22:30.474219   10312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
I1213 09:22:30.611622   10312 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.45s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-482100 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"bdbeacb5b0cf71d2b6aa2677e9a91f59ca519b02fe765645bb7ea493f793589d","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-482100"],"size":"30"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"89700000"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"51700000"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"62500000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"350b164e7ae1dcddeffa
dd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"75800000"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"70700000"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"78100000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-482100"],"size":"4940000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9
da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-482100 image ls --format json --alsologtostderr:
I1213 09:22:29.824044    3728 out.go:360] Setting OutFile to fd 1872 ...
I1213 09:22:29.871959    3728 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:22:29.871959    3728 out.go:374] Setting ErrFile to fd 1884...
I1213 09:22:29.871959    3728 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:22:29.887238    3728 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 09:22:29.887571    3728 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 09:22:29.896239    3728 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
I1213 09:22:29.960876    3728 ssh_runner.go:195] Run: systemctl --version
I1213 09:22:29.963875    3728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
I1213 09:22:30.021412    3728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
I1213 09:22:30.152593    3728 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.47s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-482100 image ls --format yaml --alsologtostderr:
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-482100
size: "4940000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: bdbeacb5b0cf71d2b6aa2677e9a91f59ca519b02fe765645bb7ea493f793589d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-482100
size: "30"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "89700000"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "51700000"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "78100000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "75800000"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "70700000"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "62500000"

functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-482100 image ls --format yaml --alsologtostderr:
I1213 09:22:28.989181   10916 out.go:360] Setting OutFile to fd 944 ...
I1213 09:22:29.036353   10916 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:22:29.036353   10916 out.go:374] Setting ErrFile to fd 1516...
I1213 09:22:29.036410   10916 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:22:29.048750   10916 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 09:22:29.049202   10916 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 09:22:29.058114   10916 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
I1213 09:22:29.112881   10916 ssh_runner.go:195] Run: systemctl --version
I1213 09:22:29.116876   10916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
I1213 09:22:29.176686   10916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
I1213 09:22:29.320922   10916 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.49s)
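
The four ImageList variants above render the same image inventory; only the --format value differs:

# Same data, four renderings, all against the functional profile.
out/minikube-windows-amd64.exe -p functional-482100 image ls --format short
out/minikube-windows-amd64.exe -p functional-482100 image ls --format table
out/minikube-windows-amd64.exe -p functional-482100 image ls --format json
out/minikube-windows-amd64.exe -p functional-482100 image ls --format yaml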

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (5.57s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-482100 ssh pgrep buildkitd: exit status 1 (546.0593ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 image build -t localhost/my-image:functional-482100 testdata\build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-windows-amd64.exe -p functional-482100 image build -t localhost/my-image:functional-482100 testdata\build --alsologtostderr: (4.5576844s)
functional_test.go:338: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-482100 image build -t localhost/my-image:functional-482100 testdata\build --alsologtostderr:
I1213 09:22:30.014795    7884 out.go:360] Setting OutFile to fd 1696 ...
I1213 09:22:30.063933    7884 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:22:30.063933    7884 out.go:374] Setting ErrFile to fd 1760...
I1213 09:22:30.063933    7884 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:22:30.075633    7884 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 09:22:30.078631    7884 config.go:182] Loaded profile config "functional-482100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1213 09:22:30.085051    7884 cli_runner.go:164] Run: docker container inspect functional-482100 --format={{.State.Status}}
I1213 09:22:30.146939    7884 ssh_runner.go:195] Run: systemctl --version
I1213 09:22:30.150925    7884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-482100
I1213 09:22:30.209577    7884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63841 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-482100\id_rsa Username:docker}
I1213 09:22:30.329212    7884 build_images.go:162] Building image from path: C:\Users\jenkins.minikube4\AppData\Local\Temp\build.3020459676.tar
I1213 09:22:30.335693    7884 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 09:22:30.358706    7884 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3020459676.tar
I1213 09:22:30.370203    7884 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3020459676.tar: stat -c "%s %y" /var/lib/minikube/build/build.3020459676.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3020459676.tar': No such file or directory
I1213 09:22:30.370337    7884 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\AppData\Local\Temp\build.3020459676.tar --> /var/lib/minikube/build/build.3020459676.tar (3072 bytes)
I1213 09:22:30.407632    7884 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3020459676
I1213 09:22:30.430035    7884 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3020459676 -xf /var/lib/minikube/build/build.3020459676.tar
I1213 09:22:30.448531    7884 docker.go:361] Building image: /var/lib/minikube/build/build.3020459676
I1213 09:22:30.452096    7884 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-482100 /var/lib/minikube/build/build.3020459676
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#4 ...

#5 [internal] load build context
#5 transferring context: 62B done
#5 DONE 0.1s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#4 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#4 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#4 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#4 DONE 0.7s

#6 [2/3] RUN true
#6 DONE 0.5s

#7 [3/3] ADD content.txt /
#7 DONE 0.2s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 writing image sha256:7edddcd3fe325886e741604d56f2190e17a614968c312daf0ff8cf126c21b1db
#8 writing image sha256:7edddcd3fe325886e741604d56f2190e17a614968c312daf0ff8cf126c21b1db done
#8 naming to localhost/my-image:functional-482100 0.0s done
#8 DONE 0.2s
I1213 09:22:34.436460    7884 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-482100 /var/lib/minikube/build/build.3020459676: (3.9837296s)
I1213 09:22:34.440590    7884 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3020459676
I1213 09:22:34.457596    7884 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3020459676.tar
I1213 09:22:34.473418    7884 build_images.go:218] Built localhost/my-image:functional-482100 from C:\Users\jenkins.minikube4\AppData\Local\Temp\build.3020459676.tar
I1213 09:22:34.473418    7884 build_images.go:134] succeeded building to: functional-482100
I1213 09:22:34.473418    7884 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 image ls
E1213 09:22:36.674707    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (5.57s)
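
As the stderr above shows, `image build` tars the local context, copies the tarball to /var/lib/minikube/build inside the node, and runs `docker build` there (pgrep buildkitd exiting 1 simply selects the classic builder path). A condensed sketch of the same flow, using the tag and context path from this run:

out/minikube-windows-amd64.exe -p functional-482100 ssh pgrep buildkitd
out/minikube-windows-amd64.exe -p functional-482100 image build -t localhost/my-image:functional-482100 testdata\build
out/minikube-windows-amd64.exe -p functional-482100 image ls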

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.82s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-482100
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.82s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (3.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 image load --daemon kicbase/echo-server:functional-482100 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-windows-amd64.exe -p functional-482100 image load --daemon kicbase/echo-server:functional-482100 --alsologtostderr: (2.8123445s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (3.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (2.84s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 image load --daemon kicbase/echo-server:functional-482100 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-windows-amd64.exe -p functional-482100 image load --daemon kicbase/echo-server:functional-482100 --alsologtostderr: (2.3969713s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (2.84s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (3.57s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-482100
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 image load --daemon kicbase/echo-server:functional-482100 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-482100 image load --daemon kicbase/echo-server:functional-482100 --alsologtostderr: (2.3895152s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (3.57s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.67s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 image save kicbase/echo-server:functional-482100 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.67s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.91s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 image rm kicbase/echo-server:functional-482100 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.91s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (1.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (1.16s)
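
ImageSaveToFile and ImageLoadFromFile together form a tar-based round trip for moving an image out of and back into the node; the host path below is illustrative (the test itself used the Jenkins workspace):

# Export an image from the node to a host tarball, then load it back.
out/minikube-windows-amd64.exe -p functional-482100 image save kicbase/echo-server:functional-482100 C:\tmp\echo-server-save.tar
out/minikube-windows-amd64.exe -p functional-482100 image load C:\tmp\echo-server-save.tar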

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.86s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-482100
functional_test.go:439: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-482100 image save --daemon kicbase/echo-server:functional-482100 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-482100
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.86s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-482100
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-482100
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-482100
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.06s)

TestMultiControlPlane/serial/StartCluster (219.35s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker
E1213 09:24:46.016308    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:25:21.983015    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:25:21.989967    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:25:22.001680    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:25:22.023911    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:25:22.065395    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:25:22.148115    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:25:22.309985    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:25:22.631564    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:25:23.274432    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:25:24.555792    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:25:27.118598    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:25:32.240484    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:25:39.753265    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:25:42.482780    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:26:02.965600    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:26:43.928264    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:27:36.678033    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:28:05.851318    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe -p ha-935300 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker: (3m37.7626481s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 status --alsologtostderr -v 5
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-935300 status --alsologtostderr -v 5: (1.5896704s)
--- PASS: TestMultiControlPlane/serial/StartCluster (219.35s)

TestMultiControlPlane/serial/DeployApp (9.7s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe -p ha-935300 kubectl -- rollout status deployment/busybox: (4.5078662s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 kubectl -- exec busybox-7b57f96db7-8jlcg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 kubectl -- exec busybox-7b57f96db7-hndv6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 kubectl -- exec busybox-7b57f96db7-z5sbs -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 kubectl -- exec busybox-7b57f96db7-8jlcg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 kubectl -- exec busybox-7b57f96db7-hndv6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 kubectl -- exec busybox-7b57f96db7-z5sbs -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 kubectl -- exec busybox-7b57f96db7-8jlcg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 kubectl -- exec busybox-7b57f96db7-hndv6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 kubectl -- exec busybox-7b57f96db7-z5sbs -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.70s)

TestMultiControlPlane/serial/PingHostFromPods (2.53s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 kubectl -- exec busybox-7b57f96db7-8jlcg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 kubectl -- exec busybox-7b57f96db7-8jlcg -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 kubectl -- exec busybox-7b57f96db7-hndv6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 kubectl -- exec busybox-7b57f96db7-hndv6 -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 kubectl -- exec busybox-7b57f96db7-z5sbs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 kubectl -- exec busybox-7b57f96db7-z5sbs -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (2.53s)
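
Note on the pipeline at ha_test.go:207: nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 takes the fifth line of busybox's nslookup output and extracts its third space-separated field, the host gateway IP that the follow-up ping -c 1 then targets. A minimal Go sketch of the same extraction, run against a hypothetical busybox transcript (real nslookup output varies by image and resolver):

	package main

	import (
		"fmt"
		"strings"
	)

	// hostIP mirrors `awk 'NR==5' | cut -d' ' -f3`: take line 5 of the
	// input and return its third field, splitting on single spaces
	// exactly as cut does (empty fields count).
	func hostIP(nslookupOut string) string {
		lines := strings.Split(nslookupOut, "\n")
		if len(lines) < 5 {
			return ""
		}
		fields := strings.Split(lines[4], " ")
		if len(fields) < 3 {
			return ""
		}
		return fields[2]
	}

	func main() {
		// Hypothetical `nslookup host.minikube.internal` transcript.
		sample := "Server:\t\t10.96.0.10\n" +
			"Address:\t10.96.0.10:53\n" +
			"\n" +
			"Name:\thost.minikube.internal\n" +
			"Address 1: 192.168.65.254 host.minikube.internal\n"
		fmt.Println(hostIP(sample)) // 192.168.65.254
	}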

TestMultiControlPlane/serial/AddWorkerNode (55.69s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe -p ha-935300 node add --alsologtostderr -v 5: (53.8005205s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-935300 status --alsologtostderr -v 5: (1.8841368s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.69s)

TestMultiControlPlane/serial/NodeLabels (0.16s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-935300 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.16s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (2.01s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.0118316s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (2.01s)

TestMultiControlPlane/serial/CopyFile (33.67s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-windows-amd64.exe -p ha-935300 status --output json --alsologtostderr -v 5: (1.9196531s)
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 cp testdata\cp-test.txt ha-935300:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 cp ha-935300:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2402776448\001\cp-test_ha-935300.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 cp ha-935300:/home/docker/cp-test.txt ha-935300-m02:/home/docker/cp-test_ha-935300_ha-935300-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300-m02 "sudo cat /home/docker/cp-test_ha-935300_ha-935300-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 cp ha-935300:/home/docker/cp-test.txt ha-935300-m03:/home/docker/cp-test_ha-935300_ha-935300-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300-m03 "sudo cat /home/docker/cp-test_ha-935300_ha-935300-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 cp ha-935300:/home/docker/cp-test.txt ha-935300-m04:/home/docker/cp-test_ha-935300_ha-935300-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300-m04 "sudo cat /home/docker/cp-test_ha-935300_ha-935300-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 cp testdata\cp-test.txt ha-935300-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300-m02 "sudo cat /home/docker/cp-test.txt"
E1213 09:29:29.095303    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 cp ha-935300-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2402776448\001\cp-test_ha-935300-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 cp ha-935300-m02:/home/docker/cp-test.txt ha-935300:/home/docker/cp-test_ha-935300-m02_ha-935300.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300 "sudo cat /home/docker/cp-test_ha-935300-m02_ha-935300.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 cp ha-935300-m02:/home/docker/cp-test.txt ha-935300-m03:/home/docker/cp-test_ha-935300-m02_ha-935300-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300-m03 "sudo cat /home/docker/cp-test_ha-935300-m02_ha-935300-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 cp ha-935300-m02:/home/docker/cp-test.txt ha-935300-m04:/home/docker/cp-test_ha-935300-m02_ha-935300-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300-m04 "sudo cat /home/docker/cp-test_ha-935300-m02_ha-935300-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 cp testdata\cp-test.txt ha-935300-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 cp ha-935300-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2402776448\001\cp-test_ha-935300-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 cp ha-935300-m03:/home/docker/cp-test.txt ha-935300:/home/docker/cp-test_ha-935300-m03_ha-935300.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300 "sudo cat /home/docker/cp-test_ha-935300-m03_ha-935300.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 cp ha-935300-m03:/home/docker/cp-test.txt ha-935300-m02:/home/docker/cp-test_ha-935300-m03_ha-935300-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300-m02 "sudo cat /home/docker/cp-test_ha-935300-m03_ha-935300-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 cp ha-935300-m03:/home/docker/cp-test.txt ha-935300-m04:/home/docker/cp-test_ha-935300-m03_ha-935300-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300-m04 "sudo cat /home/docker/cp-test_ha-935300-m03_ha-935300-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 cp testdata\cp-test.txt ha-935300-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 cp ha-935300-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2402776448\001\cp-test_ha-935300-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300-m04 "sudo cat /home/docker/cp-test.txt"
E1213 09:29:46.019289    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 cp ha-935300-m04:/home/docker/cp-test.txt ha-935300:/home/docker/cp-test_ha-935300-m04_ha-935300.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300 "sudo cat /home/docker/cp-test_ha-935300-m04_ha-935300.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 cp ha-935300-m04:/home/docker/cp-test.txt ha-935300-m02:/home/docker/cp-test_ha-935300-m04_ha-935300-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300-m02 "sudo cat /home/docker/cp-test_ha-935300-m04_ha-935300-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 cp ha-935300-m04:/home/docker/cp-test.txt ha-935300-m03:/home/docker/cp-test_ha-935300-m04_ha-935300-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 ssh -n ha-935300-m03 "sudo cat /home/docker/cp-test_ha-935300-m04_ha-935300-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (33.67s)
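
Each round-trip above is the same two-step check: minikube cp copies a file onto a node, then minikube ssh -n reads it back for comparison. A sketch of one leg in Go, assuming minikube is on PATH and reusing the profile/node name from this run:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		const profile = "ha-935300" // profile and node names taken from the log above

		// Step 1: copy a host file into the node's filesystem.
		cp := exec.Command("minikube", "-p", profile, "cp",
			"testdata/cp-test.txt", profile+":/home/docker/cp-test.txt")
		if out, err := cp.CombinedOutput(); err != nil {
			log.Fatalf("cp failed: %v\n%s", err, out)
		}

		// Step 2: read it back over SSH and confirm the contents survived.
		cat := exec.Command("minikube", "-p", profile, "ssh", "-n", profile,
			"sudo cat /home/docker/cp-test.txt")
		out, err := cat.Output()
		if err != nil {
			log.Fatalf("ssh failed: %v", err)
		}
		fmt.Println("round-tripped:", strings.TrimSpace(string(out)))
	}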

TestMultiControlPlane/serial/StopSecondaryNode (13.53s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p ha-935300 node stop m02 --alsologtostderr -v 5: (12.0057231s)
ha_test.go:371: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-935300 status --alsologtostderr -v 5: exit status 7 (1.5227185s)
-- stdout --
	ha-935300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-935300-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-935300-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-935300-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1213 09:30:04.268746   13928 out.go:360] Setting OutFile to fd 2012 ...
	I1213 09:30:04.310746   13928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:30:04.310746   13928 out.go:374] Setting ErrFile to fd 1572...
	I1213 09:30:04.310746   13928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:30:04.321969   13928 out.go:368] Setting JSON to false
	I1213 09:30:04.321969   13928 mustload.go:66] Loading cluster: ha-935300
	I1213 09:30:04.321969   13928 notify.go:221] Checking for updates...
	I1213 09:30:04.324497   13928 config.go:182] Loaded profile config "ha-935300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 09:30:04.324614   13928 status.go:174] checking status of ha-935300 ...
	I1213 09:30:04.332788   13928 cli_runner.go:164] Run: docker container inspect ha-935300 --format={{.State.Status}}
	I1213 09:30:04.388270   13928 status.go:371] ha-935300 host status = "Running" (err=<nil>)
	I1213 09:30:04.388294   13928 host.go:66] Checking if "ha-935300" exists ...
	I1213 09:30:04.392117   13928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-935300
	I1213 09:30:04.448814   13928 host.go:66] Checking if "ha-935300" exists ...
	I1213 09:30:04.453582   13928 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:30:04.457211   13928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-935300
	I1213 09:30:04.509372   13928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49353 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-935300\id_rsa Username:docker}
	I1213 09:30:04.638925   13928 ssh_runner.go:195] Run: systemctl --version
	I1213 09:30:04.659355   13928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:30:04.682040   13928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-935300
	I1213 09:30:04.740146   13928 kubeconfig.go:125] found "ha-935300" server: "https://127.0.0.1:49352"
	I1213 09:30:04.740146   13928 api_server.go:166] Checking apiserver status ...
	I1213 09:30:04.744145   13928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:30:04.768887   13928 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2325/cgroup
	I1213 09:30:04.784876   13928 api_server.go:182] apiserver freezer: "7:freezer:/docker/b3e96ac90c72e07ebbb244ac0aedaaa3f71d34159650802b181c6edf7fc1e71e/kubepods/burstable/pod91cdd1de7241eb3b9cd3e089de2d7f5f/6b7e566a7d98bc2c3d27dc4454a085a01400709ea252a3bf793021691fbb66b9"
	I1213 09:30:04.788804   13928 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b3e96ac90c72e07ebbb244ac0aedaaa3f71d34159650802b181c6edf7fc1e71e/kubepods/burstable/pod91cdd1de7241eb3b9cd3e089de2d7f5f/6b7e566a7d98bc2c3d27dc4454a085a01400709ea252a3bf793021691fbb66b9/freezer.state
	I1213 09:30:04.803011   13928 api_server.go:204] freezer state: "THAWED"
	I1213 09:30:04.803053   13928 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:49352/healthz ...
	I1213 09:30:04.813856   13928 api_server.go:279] https://127.0.0.1:49352/healthz returned 200:
	ok
	I1213 09:30:04.813856   13928 status.go:463] ha-935300 apiserver status = Running (err=<nil>)
	I1213 09:30:04.813856   13928 status.go:176] ha-935300 status: &{Name:ha-935300 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 09:30:04.813856   13928 status.go:174] checking status of ha-935300-m02 ...
	I1213 09:30:04.821152   13928 cli_runner.go:164] Run: docker container inspect ha-935300-m02 --format={{.State.Status}}
	I1213 09:30:04.875660   13928 status.go:371] ha-935300-m02 host status = "Stopped" (err=<nil>)
	I1213 09:30:04.875660   13928 status.go:384] host is not running, skipping remaining checks
	I1213 09:30:04.875660   13928 status.go:176] ha-935300-m02 status: &{Name:ha-935300-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 09:30:04.875660   13928 status.go:174] checking status of ha-935300-m03 ...
	I1213 09:30:04.883566   13928 cli_runner.go:164] Run: docker container inspect ha-935300-m03 --format={{.State.Status}}
	I1213 09:30:04.936886   13928 status.go:371] ha-935300-m03 host status = "Running" (err=<nil>)
	I1213 09:30:04.937403   13928 host.go:66] Checking if "ha-935300-m03" exists ...
	I1213 09:30:04.941045   13928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-935300-m03
	I1213 09:30:04.997492   13928 host.go:66] Checking if "ha-935300-m03" exists ...
	I1213 09:30:05.002058   13928 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:30:05.005651   13928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-935300-m03
	I1213 09:30:05.062168   13928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49473 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-935300-m03\id_rsa Username:docker}
	I1213 09:30:05.206938   13928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:30:05.231590   13928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-935300
	I1213 09:30:05.289104   13928 kubeconfig.go:125] found "ha-935300" server: "https://127.0.0.1:49352"
	I1213 09:30:05.289662   13928 api_server.go:166] Checking apiserver status ...
	I1213 09:30:05.294722   13928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:30:05.321483   13928 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2176/cgroup
	I1213 09:30:05.336487   13928 api_server.go:182] apiserver freezer: "7:freezer:/docker/b17962272c8bbabcf55c31a8165618a7f62d82a14633c661d90e144a847543e6/kubepods/burstable/pod2b08b5d6fdb8abb38f13c1eb13c256d6/c6d9cc01046248a3512011920fcb180669ad28288f166741bb252cc3cd77db3b"
	I1213 09:30:05.340478   13928 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b17962272c8bbabcf55c31a8165618a7f62d82a14633c661d90e144a847543e6/kubepods/burstable/pod2b08b5d6fdb8abb38f13c1eb13c256d6/c6d9cc01046248a3512011920fcb180669ad28288f166741bb252cc3cd77db3b/freezer.state
	I1213 09:30:05.352478   13928 api_server.go:204] freezer state: "THAWED"
	I1213 09:30:05.352478   13928 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:49352/healthz ...
	I1213 09:30:05.361281   13928 api_server.go:279] https://127.0.0.1:49352/healthz returned 200:
	ok
	I1213 09:30:05.361281   13928 status.go:463] ha-935300-m03 apiserver status = Running (err=<nil>)
	I1213 09:30:05.361281   13928 status.go:176] ha-935300-m03 status: &{Name:ha-935300-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 09:30:05.361281   13928 status.go:174] checking status of ha-935300-m04 ...
	I1213 09:30:05.368344   13928 cli_runner.go:164] Run: docker container inspect ha-935300-m04 --format={{.State.Status}}
	I1213 09:30:05.426508   13928 status.go:371] ha-935300-m04 host status = "Running" (err=<nil>)
	I1213 09:30:05.426508   13928 host.go:66] Checking if "ha-935300-m04" exists ...
	I1213 09:30:05.430153   13928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-935300-m04
	I1213 09:30:05.486070   13928 host.go:66] Checking if "ha-935300-m04" exists ...
	I1213 09:30:05.490066   13928 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:30:05.494073   13928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-935300-m04
	I1213 09:30:05.543070   13928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49602 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-935300-m04\id_rsa Username:docker}
	I1213 09:30:05.676030   13928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:30:05.694391   13928 status.go:176] ha-935300-m04 status: &{Name:ha-935300-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.53s)
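
Note that the harness treats the non-zero exit from minikube status as data rather than failure: with m02 stopped, status prints the degraded node and signals the condition through the exit code (exit status 7 in this run). A caller polling status therefore has to unpack the exit code instead of bailing on any nonzero result; a sketch assuming minikube is on PATH, without relying on the meaning of individual code values:

	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// `minikube status` exits nonzero whenever some node or component
		// is not running, so the exit code is part of the answer.
		cmd := exec.Command("minikube", "-p", "ha-935300", "status")
		out, err := cmd.Output()
		var ee *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("all components running")
		case errors.As(err, &ee):
			// e.g. exit status 7 above, with ha-935300-m02 stopped.
			fmt.Printf("degraded (exit code %d):\n%s", ee.ExitCode(), out)
		default:
			log.Fatal(err) // minikube not found, etc.
		}
	}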

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.56s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.5552135s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.56s)

TestMultiControlPlane/serial/RestartSecondaryNode (50.96s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 node start m02 --alsologtostderr -v 5
E1213 09:30:21.985894    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:30:49.695038    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p ha-935300 node start m02 --alsologtostderr -v 5: (48.6306988s)
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-windows-amd64.exe -p ha-935300 status --alsologtostderr -v 5: (2.1924229s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (50.96s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.97s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.9680084s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.97s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (178.23s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-windows-amd64.exe -p ha-935300 stop --alsologtostderr -v 5: (39.0506164s)
ha_test.go:469: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 start --wait true --alsologtostderr -v 5
E1213 09:32:36.681360    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-windows-amd64.exe -p ha-935300 start --wait true --alsologtostderr -v 5: (2m18.855838s)
ha_test.go:474: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (178.23s)

TestMultiControlPlane/serial/DeleteSecondaryNode (14.65s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-windows-amd64.exe -p ha-935300 node delete m03 --alsologtostderr -v 5: (12.7663399s)
ha_test.go:495: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Done: out/minikube-windows-amd64.exe -p ha-935300 status --alsologtostderr -v 5: (1.4989308s)
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (14.65s)
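
The readiness check at ha_test.go:521 is a plain Go text/template that kubectl runs over the node list: for every node it prints the status of the Ready condition, and the test expects one True per remaining node. The same template executes locally against a JSON document; the abridged node list below is hypothetical, keeping only the fields the template visits:

	package main

	import (
		"encoding/json"
		"os"
		"text/template"
	)

	func main() {
		// Abridged, hypothetical `kubectl get nodes -o json` payload.
		const nodesJSON = `{"items":[
		 {"status":{"conditions":[{"type":"MemoryPressure","status":"False"},
		                          {"type":"Ready","status":"True"}]}},
		 {"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

		var nodes any
		if err := json.Unmarshal([]byte(nodesJSON), &nodes); err != nil {
			panic(err)
		}

		// The template string from ha_test.go:521: print the Ready
		// condition's status for each node, one per line.
		tmpl := template.Must(template.New("ready").Parse(
			`{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))
		if err := tmpl.Execute(os.Stdout, nodes); err != nil {
			panic(err)
		}
		// Prints " True" twice: both surviving nodes report Ready=True.
	}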

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.49s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.4887443s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.49s)

TestMultiControlPlane/serial/StopCluster (37.34s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 stop --alsologtostderr -v 5
E1213 09:34:46.023309    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p ha-935300 stop --alsologtostderr -v 5: (37.0066674s)
ha_test.go:539: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-935300 status --alsologtostderr -v 5: exit status 7 (330.0404ms)
-- stdout --
	ha-935300
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-935300-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-935300-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1213 09:34:51.658993    4592 out.go:360] Setting OutFile to fd 1076 ...
	I1213 09:34:51.703053    4592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:34:51.703053    4592 out.go:374] Setting ErrFile to fd 1120...
	I1213 09:34:51.703053    4592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:34:51.712612    4592 out.go:368] Setting JSON to false
	I1213 09:34:51.712612    4592 mustload.go:66] Loading cluster: ha-935300
	I1213 09:34:51.712612    4592 notify.go:221] Checking for updates...
	I1213 09:34:51.713642    4592 config.go:182] Loaded profile config "ha-935300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 09:34:51.713642    4592 status.go:174] checking status of ha-935300 ...
	I1213 09:34:51.721813    4592 cli_runner.go:164] Run: docker container inspect ha-935300 --format={{.State.Status}}
	I1213 09:34:51.776720    4592 status.go:371] ha-935300 host status = "Stopped" (err=<nil>)
	I1213 09:34:51.776720    4592 status.go:384] host is not running, skipping remaining checks
	I1213 09:34:51.776720    4592 status.go:176] ha-935300 status: &{Name:ha-935300 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 09:34:51.776720    4592 status.go:174] checking status of ha-935300-m02 ...
	I1213 09:34:51.786181    4592 cli_runner.go:164] Run: docker container inspect ha-935300-m02 --format={{.State.Status}}
	I1213 09:34:51.837927    4592 status.go:371] ha-935300-m02 host status = "Stopped" (err=<nil>)
	I1213 09:34:51.837927    4592 status.go:384] host is not running, skipping remaining checks
	I1213 09:34:51.837927    4592 status.go:176] ha-935300-m02 status: &{Name:ha-935300-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 09:34:51.837927    4592 status.go:174] checking status of ha-935300-m04 ...
	I1213 09:34:51.847107    4592 cli_runner.go:164] Run: docker container inspect ha-935300-m04 --format={{.State.Status}}
	I1213 09:34:51.896101    4592 status.go:371] ha-935300-m04 host status = "Stopped" (err=<nil>)
	I1213 09:34:51.896101    4592 status.go:384] host is not running, skipping remaining checks
	I1213 09:34:51.896101    4592 status.go:176] ha-935300-m04 status: &{Name:ha-935300-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (37.34s)

TestMultiControlPlane/serial/RestartCluster (91.31s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 start --wait true --alsologtostderr -v 5 --driver=docker
E1213 09:35:21.990325    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-windows-amd64.exe -p ha-935300 start --wait true --alsologtostderr -v 5 --driver=docker: (1m29.5050056s)
ha_test.go:568: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 status --alsologtostderr -v 5
ha_test.go:568: (dbg) Done: out/minikube-windows-amd64.exe -p ha-935300 status --alsologtostderr -v 5: (1.481128s)
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (91.31s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.5s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.5040138s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.50s)

TestMultiControlPlane/serial/AddSecondaryNode (79.51s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 node add --control-plane --alsologtostderr -v 5
E1213 09:37:36.684551    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-windows-amd64.exe -p ha-935300 node add --control-plane --alsologtostderr -v 5: (1m17.5618674s)
ha_test.go:613: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-935300 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-windows-amd64.exe -p ha-935300 status --alsologtostderr -v 5: (1.9490301s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.51s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.96s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.9630982s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.96s)

TestImageBuild/serial/Setup (46.84s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-673300 --driver=docker
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-673300 --driver=docker: (46.8429869s)
--- PASS: TestImageBuild/serial/Setup (46.84s)

TestImageBuild/serial/NormalBuild (4.43s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-673300
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-673300: (4.4320021s)
--- PASS: TestImageBuild/serial/NormalBuild (4.43s)

TestImageBuild/serial/BuildWithBuildArg (2.03s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-673300
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-673300: (2.0311844s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (2.03s)

TestImageBuild/serial/BuildWithDockerIgnore (1.32s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-673300
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-673300: (1.3214979s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.32s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.22s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-673300
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-673300: (1.2234867s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.22s)

TestJSONOutput/start/Command (79.72s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-660400 --output=json --user=testUser --memory=3072 --wait=true --driver=docker
E1213 09:39:46.026054    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:40:21.993234    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-660400 --output=json --user=testUser --memory=3072 --wait=true --driver=docker: (1m19.7148652s)
--- PASS: TestJSONOutput/start/Command (79.72s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (1.15s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-660400 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-660400 --output=json --user=testUser: (1.1539406s)
--- PASS: TestJSONOutput/pause/Command (1.15s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.95s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-660400 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.95s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.19s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-660400 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-660400 --output=json --user=testUser: (12.1856974s)
--- PASS: TestJSONOutput/stop/Command (12.19s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.67s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-669800 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-669800 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (211.7776ms)
-- stdout --
	{"specversion":"1.0","id":"597e8f2b-76eb-499b-adfc-01d5fe438a4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-669800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"175aa16a-0438-4e28-84da-0a8f8e709dea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"ba6c5a8b-0d2e-43f0-8c96-85b301b2676f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4819e544-5bc0-43fa-952c-7adb382850f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"5615f9f5-64fc-4f64-8f6b-ccbd4695c2a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22128"}}
	{"specversion":"1.0","id":"344c9b8f-b45d-4543-92ce-4db3098ab075","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2e0c9054-b30b-40b0-b5b9-abb58fcb8c3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-669800" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-669800
--- PASS: TestErrorJSONOutput (0.67s)
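Note: the JSON lines above are CloudEvents-style events emitted by --output=json. A minimal Go sketch (not minikube's own code; the struct below is illustrative) of consuming such a stream and surfacing error events like the DRV_UNSUPPORTED_OS one in this run:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent mirrors the fields visible in the log lines above; all data values
// arrive as strings ("currentstep":"0", "exitcode":"56", ...).
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. pipe `minikube start --output=json` in
	for sc.Scan() {
		var ev cloudEvent
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // ignore any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("exit %s: %s (%s)\n", ev.Data["exitcode"], ev.Data["message"], ev.Data["name"])
		}
	}
}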

                                                
                                    
TestKicCustomNetwork/create_custom_network (55.43s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-115200 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-115200 --network=: (51.8093334s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-115200" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-115200
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-115200: (3.5605507s)
--- PASS: TestKicCustomNetwork/create_custom_network (55.43s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (54.92s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-390800 --network=bridge
E1213 09:41:45.065133    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:42:19.766370    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-390800 --network=bridge: (51.651296s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-390800" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-390800
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-390800: (3.2090897s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (54.92s)

                                                
                                    
TestKicExistingNetwork (55.45s)
=== RUN   TestKicExistingNetwork
I1213 09:42:31.054504    2968 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1213 09:42:31.111203    2968 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1213 09:42:31.115163    2968 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1213 09:42:31.115227    2968 cli_runner.go:164] Run: docker network inspect existing-network
W1213 09:42:31.177710    2968 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1213 09:42:31.177710    2968 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1213 09:42:31.177710    2968 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1213 09:42:31.181851    2968 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1213 09:42:31.251696    2968 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f0eed0}
I1213 09:42:31.251696    2968 network_create.go:124] attempt to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1213 09:42:31.254982    2968 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
W1213 09:42:31.313931    2968 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network returned with exit code 1
W1213 09:42:31.314051    2968 network_create.go:149] failed to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network: exit status 1
stdout:

                                                
                                                
stderr:
Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
W1213 09:42:31.314082    2968 network_create.go:116] failed to create docker network existing-network 192.168.49.0/24, will retry: subnet is taken
I1213 09:42:31.337904    2968 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1213 09:42:31.352539    2968 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014ebb30}
I1213 09:42:31.352539    2968 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1213 09:42:31.355944    2968 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1213 09:42:31.507351    2968 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-420500 --network=existing-network
E1213 09:42:36.688153    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-420500 --network=existing-network: (51.5976599s)
helpers_test.go:176: Cleaning up "existing-network-420500" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-420500
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-420500: (3.2685577s)
I1213 09:43:26.442643    2968 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (55.45s)
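Note: the retry sequence above (192.168.49.0/24 overlaps with an existing pool, 192.168.58.0/24 succeeds) shows the subnet-probing pattern. A rough Go sketch of the same pattern, assuming docker is on PATH; the step size of 9 (49 -> 58) copies this log's sequence and is not necessarily minikube's actual allocator:

package main

import (
	"fmt"
	"os/exec"
)

// createNetwork walks candidate /24s until `docker network create` stops failing
// with a pool-overlap error, then reports the subnet that worked.
func createNetwork(name string) (string, error) {
	for third := 49; third <= 254; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		gateway := fmt.Sprintf("192.168.%d.1", third)
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
		if err != nil {
			fmt.Printf("subnet %s taken, retrying: %s", subnet, out)
			continue
		}
		return subnet, nil
	}
	return "", fmt.Errorf("no free subnet found for %s", name)
}

func main() {
	if subnet, err := createNetwork("existing-network"); err == nil {
		fmt.Println("created on", subnet)
	}
}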

                                                
                                    
TestKicCustomSubnet (52.76s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-200000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-200000 --subnet=192.168.60.0/24: (49.1652995s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-200000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-200000" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-200000
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-200000: (3.5298334s)
--- PASS: TestKicCustomSubnet (52.76s)

                                                
                                    
TestKicStaticIP (54.68s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe start -p static-ip-909900 --static-ip=192.168.200.200
E1213 09:44:46.029986    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe start -p static-ip-909900 --static-ip=192.168.200.200: (50.9154954s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe -p static-ip-909900 ip
helpers_test.go:176: Cleaning up "static-ip-909900" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p static-ip-909900
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p static-ip-909900: (3.4503577s)
--- PASS: TestKicStaticIP (54.68s)
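Note: a condensed sketch of the static-IP round trip exercised above (start with --static-ip, then compare against `minikube ip`); the profile name below is hypothetical:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const profile, want = "static-ip-demo", "192.168.200.200" // hypothetical profile
	if out, err := exec.Command("minikube", "start", "-p", profile,
		"--static-ip="+want, "--driver=docker").CombinedOutput(); err != nil {
		panic(fmt.Sprintf("start: %v: %s", err, out))
	}
	got, _ := exec.Command("minikube", "-p", profile, "ip").Output()
	fmt.Println("static IP honored:", strings.TrimSpace(string(got)) == want)
}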

                                                
                                    
TestMainNoArgs (0.16s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.16s)

                                                
                                    
TestMinikubeProfile (101.86s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-781500 --driver=docker
E1213 09:45:21.996390    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-781500 --driver=docker: (47.2769101s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-781500 --driver=docker
E1213 09:46:09.109344    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-781500 --driver=docker: (44.3980797s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-781500
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.1677668s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-781500
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.1792891s)
helpers_test.go:176: Cleaning up "second-781500" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-781500
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-781500: (3.7614357s)
helpers_test.go:176: Cleaning up "first-781500" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-781500
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-781500: (3.6473183s)
--- PASS: TestMinikubeProfile (101.86s)
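Note: the test consumes `profile list -ojson` twice. A sketch of parsing that output in Go; the top-level "valid" key and per-profile "Name" field are assumptions about the JSON shape, not something confirmed by this log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-ojson").Output()
	if err != nil {
		panic(err)
	}
	// Assumed shape: {"valid": [{"Name": ...}, ...], "invalid": [...]}.
	var list struct {
		Valid []struct {
			Name string `json:"Name"`
		} `json:"valid"`
	}
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, p := range list.Valid {
		fmt.Println("profile:", p.Name)
	}
}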

                                                
                                    
TestMountStart/serial/StartWithMountFirst (13.77s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-806000 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial2572660771\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:118: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-806000 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial2572660771\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (12.7683018s)
--- PASS: TestMountStart/serial/StartWithMountFirst (13.77s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.57s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-806000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.57s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (13.83s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-806000 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial2572660771\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:118: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-806000 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial2572660771\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (12.8329968s)
--- PASS: TestMountStart/serial/StartWithMountSecond (13.83s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.52s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-806000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.52s)

                                                
                                    
TestMountStart/serial/DeleteFirst (2.43s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-806000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-806000 --alsologtostderr -v=5: (2.4287243s)
--- PASS: TestMountStart/serial/DeleteFirst (2.43s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.53s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-806000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.53s)

                                                
                                    
TestMountStart/serial/Stop (1.87s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-806000
mount_start_test.go:196: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-806000: (1.8650324s)
--- PASS: TestMountStart/serial/Stop (1.87s)

                                                
                                    
TestMountStart/serial/RestartStopped (10.87s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-806000
E1213 09:47:36.691723    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-806000: (9.8740264s)
--- PASS: TestMountStart/serial/RestartStopped (10.87s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.55s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-806000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.55s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (131.17s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-905400 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker
E1213 09:49:46.033908    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-905400 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker: (2m10.2124747s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (131.17s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (7.91s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-905400 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-905400 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-905400 -- rollout status deployment/busybox: (4.0252212s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-905400 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-905400 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-905400 -- exec busybox-7b57f96db7-77f84 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-905400 -- exec busybox-7b57f96db7-bmdhj -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-905400 -- exec busybox-7b57f96db7-77f84 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-905400 -- exec busybox-7b57f96db7-bmdhj -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-905400 -- exec busybox-7b57f96db7-77f84 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-905400 -- exec busybox-7b57f96db7-bmdhj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.91s)
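Note: the pod names and IPs above are fetched with kubectl jsonpath queries before the per-pod DNS checks. A sketch of the same lookup-and-exec loop driven from Go, using the context name from this run; kubectl on PATH is assumed:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const ctx = "multinode-905400" // context name from this run
	out, err := exec.Command("kubectl", "--context", ctx, "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		panic(err)
	}
	for _, pod := range strings.Fields(string(out)) {
		// Same per-pod DNS probe as `kubectl exec ... -- nslookup kubernetes.io`.
		res, _ := exec.Command("kubectl", "--context", ctx,
			"exec", pod, "--", "nslookup", "kubernetes.io").CombinedOutput()
		fmt.Printf("%s:\n%s\n", pod, res)
	}
}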

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.75s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-905400 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-905400 -- exec busybox-7b57f96db7-77f84 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-905400 -- exec busybox-7b57f96db7-77f84 -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-905400 -- exec busybox-7b57f96db7-bmdhj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-905400 -- exec busybox-7b57f96db7-bmdhj -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.75s)
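Note: the shell pipeline above (awk 'NR==5' | cut -d' ' -f3) pulls the host IP out of BusyBox-style nslookup output. The same parse in Go; the sample output below is an assumption matching this run's layout, which may differ between images:

package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup reproduces `awk 'NR==5' | cut -d' ' -f3`: take line 5 of the
// output, split on single spaces, return field 3.
func hostIPFromNslookup(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ") // NR==5 is index 4
	if len(fields) < 3 {
		return ""
	}
	return fields[2] // cut -d' ' -f3
}

func main() {
	sample := "Server:    10.96.0.10\nAddress:   10.96.0.10:53\n\n" +
		"Name:      host.minikube.internal\nAddress 1: 192.168.65.254 host.minikube.internal\n"
	fmt.Println(hostIPFromNslookup(sample)) // 192.168.65.254
}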

                                                
                                    
TestMultiNode/serial/AddNode (54.2s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-905400 -v=5 --alsologtostderr
E1213 09:50:22.000468    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-905400 -v=5 --alsologtostderr: (52.8664905s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-905400 status --alsologtostderr: (1.3340969s)
--- PASS: TestMultiNode/serial/AddNode (54.20s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.14s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-905400 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.14s)

                                                
                                    
TestMultiNode/serial/ProfileList (1.38s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.3779541s)
--- PASS: TestMultiNode/serial/ProfileList (1.38s)

                                                
                                    
TestMultiNode/serial/CopyFile (19.25s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-905400 status --output json --alsologtostderr: (1.3110359s)
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 cp testdata\cp-test.txt multinode-905400:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 ssh -n multinode-905400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 cp multinode-905400:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile699945452\001\cp-test_multinode-905400.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 ssh -n multinode-905400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 cp multinode-905400:/home/docker/cp-test.txt multinode-905400-m02:/home/docker/cp-test_multinode-905400_multinode-905400-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 ssh -n multinode-905400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 ssh -n multinode-905400-m02 "sudo cat /home/docker/cp-test_multinode-905400_multinode-905400-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 cp multinode-905400:/home/docker/cp-test.txt multinode-905400-m03:/home/docker/cp-test_multinode-905400_multinode-905400-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 ssh -n multinode-905400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 ssh -n multinode-905400-m03 "sudo cat /home/docker/cp-test_multinode-905400_multinode-905400-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 cp testdata\cp-test.txt multinode-905400-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 ssh -n multinode-905400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 cp multinode-905400-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile699945452\001\cp-test_multinode-905400-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 ssh -n multinode-905400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 cp multinode-905400-m02:/home/docker/cp-test.txt multinode-905400:/home/docker/cp-test_multinode-905400-m02_multinode-905400.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 ssh -n multinode-905400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 ssh -n multinode-905400 "sudo cat /home/docker/cp-test_multinode-905400-m02_multinode-905400.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 cp multinode-905400-m02:/home/docker/cp-test.txt multinode-905400-m03:/home/docker/cp-test_multinode-905400-m02_multinode-905400-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 ssh -n multinode-905400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 ssh -n multinode-905400-m03 "sudo cat /home/docker/cp-test_multinode-905400-m02_multinode-905400-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 cp testdata\cp-test.txt multinode-905400-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 ssh -n multinode-905400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 cp multinode-905400-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile699945452\001\cp-test_multinode-905400-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 ssh -n multinode-905400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 cp multinode-905400-m03:/home/docker/cp-test.txt multinode-905400:/home/docker/cp-test_multinode-905400-m03_multinode-905400.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 ssh -n multinode-905400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 ssh -n multinode-905400 "sudo cat /home/docker/cp-test_multinode-905400-m03_multinode-905400.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 cp multinode-905400-m03:/home/docker/cp-test.txt multinode-905400-m02:/home/docker/cp-test_multinode-905400-m03_multinode-905400-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 ssh -n multinode-905400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 ssh -n multinode-905400-m02 "sudo cat /home/docker/cp-test_multinode-905400-m03_multinode-905400-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (19.25s)
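Note: the copy matrix above exercises all three directions of `minikube cp` (host -> node, node -> host, node -> node), each verified with `ssh ... sudo cat`. A reduced sketch of one round trip; the profile name and local paths are illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func cp(profile, src, dst string) {
	if out, err := exec.Command("minikube", "-p", profile, "cp", src, dst).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("cp %s -> %s: %v: %s", src, dst, err, out))
	}
}

func main() {
	const p = "multinode-demo" // hypothetical profile
	cp(p, "testdata/cp-test.txt", p+":/home/docker/cp-test.txt")  // host -> node
	cp(p, p+":/home/docker/cp-test.txt", "cp-test-roundtrip.txt") // node -> host
	out, _ := exec.Command("minikube", "-p", p, "ssh", "-n", p,
		"sudo cat /home/docker/cp-test.txt").Output()
	fmt.Printf("node copy contains: %s", out)
}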

                                                
                                    
TestMultiNode/serial/StopNode (3.75s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-905400 node stop m03: (1.6635664s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-905400 status: exit status 7 (1.0394292s)

                                                
                                                
-- stdout --
	multinode-905400
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-905400-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-905400-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-905400 status --alsologtostderr: exit status 7 (1.0458797s)

                                                
                                                
-- stdout --
	multinode-905400
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-905400-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-905400-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 09:51:22.505107   11340 out.go:360] Setting OutFile to fd 896 ...
	I1213 09:51:22.551580   11340 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:51:22.552104   11340 out.go:374] Setting ErrFile to fd 1736...
	I1213 09:51:22.552104   11340 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:51:22.562547   11340 out.go:368] Setting JSON to false
	I1213 09:51:22.562547   11340 mustload.go:66] Loading cluster: multinode-905400
	I1213 09:51:22.562547   11340 notify.go:221] Checking for updates...
	I1213 09:51:22.563548   11340 config.go:182] Loaded profile config "multinode-905400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 09:51:22.563548   11340 status.go:174] checking status of multinode-905400 ...
	I1213 09:51:22.570569   11340 cli_runner.go:164] Run: docker container inspect multinode-905400 --format={{.State.Status}}
	I1213 09:51:22.636303   11340 status.go:371] multinode-905400 host status = "Running" (err=<nil>)
	I1213 09:51:22.636303   11340 host.go:66] Checking if "multinode-905400" exists ...
	I1213 09:51:22.640870   11340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-905400
	I1213 09:51:22.697947   11340 host.go:66] Checking if "multinode-905400" exists ...
	I1213 09:51:22.702651   11340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:51:22.706390   11340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-905400
	I1213 09:51:22.761230   11340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50829 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-905400\id_rsa Username:docker}
	I1213 09:51:22.898932   11340 ssh_runner.go:195] Run: systemctl --version
	I1213 09:51:22.915107   11340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:51:22.941338   11340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-905400
	I1213 09:51:22.997478   11340 kubeconfig.go:125] found "multinode-905400" server: "https://127.0.0.1:50833"
	I1213 09:51:22.997478   11340 api_server.go:166] Checking apiserver status ...
	I1213 09:51:23.001491   11340 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:51:23.023163   11340 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2315/cgroup
	I1213 09:51:23.036167   11340 api_server.go:182] apiserver freezer: "7:freezer:/docker/5bc02be81967ad3380c39d120dcfd3bdd93a6d2f3a0b69a14eec3f7989222c7c/kubepods/burstable/pode5ce1a35be1c2df36eac8ca792fad18f/7d1c3dbadbf686bd79247665fefc4858db2c410744c28a24348439cf1ea55928"
	I1213 09:51:23.041378   11340 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5bc02be81967ad3380c39d120dcfd3bdd93a6d2f3a0b69a14eec3f7989222c7c/kubepods/burstable/pode5ce1a35be1c2df36eac8ca792fad18f/7d1c3dbadbf686bd79247665fefc4858db2c410744c28a24348439cf1ea55928/freezer.state
	I1213 09:51:23.056207   11340 api_server.go:204] freezer state: "THAWED"
	I1213 09:51:23.056207   11340 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:50833/healthz ...
	I1213 09:51:23.068463   11340 api_server.go:279] https://127.0.0.1:50833/healthz returned 200:
	ok
	I1213 09:51:23.068463   11340 status.go:463] multinode-905400 apiserver status = Running (err=<nil>)
	I1213 09:51:23.068463   11340 status.go:176] multinode-905400 status: &{Name:multinode-905400 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 09:51:23.068463   11340 status.go:174] checking status of multinode-905400-m02 ...
	I1213 09:51:23.075650   11340 cli_runner.go:164] Run: docker container inspect multinode-905400-m02 --format={{.State.Status}}
	I1213 09:51:23.131553   11340 status.go:371] multinode-905400-m02 host status = "Running" (err=<nil>)
	I1213 09:51:23.131553   11340 host.go:66] Checking if "multinode-905400-m02" exists ...
	I1213 09:51:23.136476   11340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-905400-m02
	I1213 09:51:23.190761   11340 host.go:66] Checking if "multinode-905400-m02" exists ...
	I1213 09:51:23.195812   11340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:51:23.199621   11340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-905400-m02
	I1213 09:51:23.255924   11340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50880 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-905400-m02\id_rsa Username:docker}
	I1213 09:51:23.375195   11340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:51:23.392093   11340 status.go:176] multinode-905400-m02 status: &{Name:multinode-905400-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1213 09:51:23.392093   11340 status.go:174] checking status of multinode-905400-m03 ...
	I1213 09:51:23.399549   11340 cli_runner.go:164] Run: docker container inspect multinode-905400-m03 --format={{.State.Status}}
	I1213 09:51:23.452736   11340 status.go:371] multinode-905400-m03 host status = "Stopped" (err=<nil>)
	I1213 09:51:23.452736   11340 status.go:384] host is not running, skipping remaining checks
	I1213 09:51:23.452736   11340 status.go:176] multinode-905400-m03 status: &{Name:multinode-905400-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.75s)
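Note: `minikube status` exits non-zero here (exit status 7) because one node is stopped, so callers should branch on the exit code rather than treat it as a hard failure. A sketch of that check, reusing this run's profile name:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "multinode-905400", "status").CombinedOutput()
	fmt.Print(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		fmt.Println("exit 7: at least one node is stopped (expected right after `node stop`)")
	}
}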

                                                
                                    
TestMultiNode/serial/StartAfterStop (13.24s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-905400 node start m03 -v=5 --alsologtostderr: (11.80179s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 status -v=5 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-905400 status -v=5 --alsologtostderr: (1.3123818s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.24s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (82.04s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-905400
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-905400
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-905400: (24.8470252s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-905400 --wait=true -v=5 --alsologtostderr
E1213 09:52:36.695276    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-905400 --wait=true -v=5 --alsologtostderr: (56.894534s)
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-905400
--- PASS: TestMultiNode/serial/RestartKeepsNodes (82.04s)

                                                
                                    
TestMultiNode/serial/DeleteNode (8.39s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-905400 node delete m03: (6.9549453s)
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 status --alsologtostderr
multinode_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-905400 status --alsologtostderr: (1.0794044s)
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (8.39s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.01s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 stop
multinode_test.go:345: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-905400 stop: (23.4689954s)
multinode_test.go:351: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-905400 status: exit status 7 (267.7657ms)

                                                
                                                
-- stdout --
	multinode-905400
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-905400-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-905400 status --alsologtostderr: exit status 7 (273.7334ms)

                                                
                                                
-- stdout --
	multinode-905400
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-905400-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 09:53:30.954869   10656 out.go:360] Setting OutFile to fd 1372 ...
	I1213 09:53:30.998476   10656 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:53:30.998476   10656 out.go:374] Setting ErrFile to fd 2016...
	I1213 09:53:30.998476   10656 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:53:31.009488   10656 out.go:368] Setting JSON to false
	I1213 09:53:31.009488   10656 mustload.go:66] Loading cluster: multinode-905400
	I1213 09:53:31.009488   10656 notify.go:221] Checking for updates...
	I1213 09:53:31.009488   10656 config.go:182] Loaded profile config "multinode-905400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1213 09:53:31.010493   10656 status.go:174] checking status of multinode-905400 ...
	I1213 09:53:31.017572   10656 cli_runner.go:164] Run: docker container inspect multinode-905400 --format={{.State.Status}}
	I1213 09:53:31.072277   10656 status.go:371] multinode-905400 host status = "Stopped" (err=<nil>)
	I1213 09:53:31.073276   10656 status.go:384] host is not running, skipping remaining checks
	I1213 09:53:31.073276   10656 status.go:176] multinode-905400 status: &{Name:multinode-905400 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 09:53:31.073276   10656 status.go:174] checking status of multinode-905400-m02 ...
	I1213 09:53:31.081860   10656 cli_runner.go:164] Run: docker container inspect multinode-905400-m02 --format={{.State.Status}}
	I1213 09:53:31.136072   10656 status.go:371] multinode-905400-m02 host status = "Stopped" (err=<nil>)
	I1213 09:53:31.136072   10656 status.go:384] host is not running, skipping remaining checks
	I1213 09:53:31.136072   10656 status.go:176] multinode-905400-m02 status: &{Name:multinode-905400-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.01s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (61.56s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-905400 --wait=true -v=5 --alsologtostderr --driver=docker
multinode_test.go:376: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-905400 --wait=true -v=5 --alsologtostderr --driver=docker: (1m0.1874291s)
multinode_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-905400 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (61.56s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (50.3s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-905400
multinode_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-905400-m02 --driver=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-905400-m02 --driver=docker: exit status 14 (203.2257ms)

                                                
                                                
-- stdout --
	* [multinode-905400-m02] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-905400-m02' is duplicated with machine name 'multinode-905400-m02' in profile 'multinode-905400'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-905400-m03 --driver=docker
E1213 09:54:46.037903    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-905400-m03 --driver=docker: (45.7418833s)
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-905400
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-905400: exit status 80 (659.1235ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-905400 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-905400-m03 already exists in multinode-905400-m03 profile
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_node_6ccce2fc44e3bb58d6c4f91e09ae7c7eaaf65535_16.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-905400-m03
E1213 09:55:22.004507    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-905400-m03: (3.5447988s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (50.30s)

                                                
                                    
TestPreload (164.71s)
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-648500 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker
preload_test.go:41: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-648500 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker: (1m32.5501229s)
preload_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-648500 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-648500 image pull gcr.io/k8s-minikube/busybox: (2.182743s)
preload_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-648500
preload_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-648500: (11.9730816s)
preload_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-648500 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker
E1213 09:57:36.699692    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-648500 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker: (53.7842948s)
preload_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-648500 image list
helpers_test.go:176: Cleaning up "test-preload-648500" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-648500
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-648500: (3.7481328s)
--- PASS: TestPreload (164.71s)
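Note: the preload flow above is: start with preloaded tarballs disabled, pull an extra image, stop, restart with preload enabled, and confirm the pulled image survived. A condensed sketch of the same sequence, assuming `minikube` is on PATH; the profile name is shortened and hypothetical:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func run(args ...string) string {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("minikube %s: %v: %s", strings.Join(args, " "), err, out))
	}
	return string(out)
}

func main() {
	const p = "preload-demo" // hypothetical profile
	run("start", "-p", p, "--preload=false", "--driver=docker")
	run("-p", p, "image", "pull", "gcr.io/k8s-minikube/busybox")
	run("stop", "-p", p)
	run("start", "-p", p, "--preload=true", "--driver=docker")
	if strings.Contains(run("-p", p, "image", "list"), "busybox") {
		fmt.Println("pulled image survived the preload restart")
	}
	run("delete", "-p", p)
}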

                                                
                                    
TestScheduledStopWindows (111.66s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-623600 --memory=3072 --driver=docker
E1213 09:58:25.079595    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:58:59.781861    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-623600 --memory=3072 --driver=docker: (45.5007741s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-623600 --schedule 5m
minikube stop output:
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-623600 -n scheduled-stop-623600
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-623600 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-623600 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-623600 --schedule 5s: (1.0647501s)
minikube stop output:
E1213 09:59:46.041375    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-623600
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-623600: exit status 7 (230.0597ms)
-- stdout --
	scheduled-stop-623600
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-623600 -n scheduled-stop-623600
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-623600 -n scheduled-stop-623600: exit status 7 (216.7858ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-623600" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-623600
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-623600: (2.4711097s)
--- PASS: TestScheduledStopWindows (111.66s)
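The scheduled-stop flow above reduces to a few commands (a sketch; profile name illustrative):

    # arm a stop five minutes out, then inspect the countdown and the systemd unit behind it
    minikube stop -p scheduled-stop --schedule 5m
    minikube status -p scheduled-stop --format={{.TimeToStop}}
    minikube ssh -p scheduled-stop -- sudo systemctl show minikube-scheduled-stop --no-page
    # re-arming with a short window actually stops the cluster; status then exits 7 (Stopped)
    minikube stop -p scheduled-stop --schedule 5s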

TestInsufficientStorage (27.49s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-453400 --memory=3072 --output=json --wait=true --driver=docker
E1213 10:00:22.007388    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-453400 --memory=3072 --output=json --wait=true --driver=docker: exit status 26 (23.6559873s)
-- stdout --
	{"specversion":"1.0","id":"84c5b778-c802-400b-96c7-1ad045061ad1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-453400] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"448bbb48-fd2e-4e97-95c9-1d7cd6a7ef5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"804869f9-5f0f-465a-9262-2b602bc52d66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6af109ad-3b3a-4305-975d-e710fca709c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"50c24629-2909-4ab3-be69-3d01c1bc3acc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22128"}}
	{"specversion":"1.0","id":"081ec4ff-9c4e-4721-85cf-6468cda9e76d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"592531cc-ac37-4570-96f5-41b3b05cdb11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"eb4de729-81ba-4647-b1fa-5a0b7ac8124d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8c63a9c3-9512-4495-98dc-34035f671d5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"aeed77be-c453-4958-af43-f6dc7f6c5e5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"ab9b1a34-f8d5-42e4-8e58-e5305123580f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-453400\" primary control-plane node in \"insufficient-storage-453400\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a850b536-1b27-4d72-af8d-85ff1667eed3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765275396-22083 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"920d086b-f858-4d15-959f-f83b0344e5d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f051a6b2-7e27-401e-b280-1ef9f323ce71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-453400 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-453400 --output=json --layout=cluster: exit status 7 (561.9947ms)
-- stdout --
	{"Name":"insufficient-storage-453400","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-453400","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1213 10:00:30.137452    3068 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-453400" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-453400 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-453400 --output=json --layout=cluster: exit status 7 (557.4925ms)
-- stdout --
	{"Name":"insufficient-storage-453400","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-453400","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1213 10:00:30.692735    9308 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-453400" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	E1213 10:00:30.717803    9308 status.go:258] unable to read event log: stat: GetFileAttributesEx C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\insufficient-storage-453400\events.json: The system cannot find the file specified.
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-453400" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-453400
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-453400: (2.7106882s)
--- PASS: TestInsufficientStorage (27.49s)
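Exit status 26 corresponds to the RSRC_DOCKER_STORAGE error in the JSON event stream above. Judging by the setup events, the test appears to drive the condition with the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE overrides rather than by actually filling the disk; a PowerShell sketch with the values shown here:

    $env:MINIKUBE_TEST_STORAGE_CAPACITY = "100"
    $env:MINIKUBE_TEST_AVAILABLE_STORAGE = "19"
    # expected to fail with exit code 26 (Docker out of disk space)
    minikube start -p insufficient-storage --memory=3072 --output=json --wait=true --driver=docker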

TestRunningBinaryUpgrade (372.36s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.3907871233.exe start -p running-upgrade-136300 --memory=3072 --vm-driver=docker
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.3907871233.exe start -p running-upgrade-136300 --memory=3072 --vm-driver=docker: (53.3418666s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-136300 --memory=3072 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-136300 --memory=3072 --alsologtostderr -v=1 --driver=docker: (5m14.6094195s)
helpers_test.go:176: Cleaning up "running-upgrade-136300" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-136300
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-136300: (3.5938258s)
--- PASS: TestRunningBinaryUpgrade (372.36s)
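The upgrade-in-place pattern here is just starting the same profile twice with two binaries, the second start reusing the still-running cluster (a sketch; the first command stands in for the downloaded v1.35.0 release used above):

    minikube-v1.35.0.exe start -p running-upgrade --memory=3072 --vm-driver=docker
    out/minikube-windows-amd64.exe start -p running-upgrade --memory=3072 --driver=docker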

TestMissingContainerUpgrade (130.15s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.3177341478.exe start -p missing-upgrade-671600 --memory=3072 --driver=docker
version_upgrade_test.go:309: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.3177341478.exe start -p missing-upgrade-671600 --memory=3072 --driver=docker: (46.9686108s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-671600
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-671600: (10.8669622s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-671600
version_upgrade_test.go:329: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-671600 --memory=3072 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-671600 --memory=3072 --alsologtostderr -v=1 --driver=docker: (1m1.2835455s)
helpers_test.go:176: Cleaning up "missing-upgrade-671600" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-671600
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-671600: (9.2137364s)
--- PASS: TestMissingContainerUpgrade (130.15s)
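This variant removes the node container out from under the old profile before upgrading, so the new binary has to notice the missing container and recreate it (a sketch; old-release binary name illustrative):

    minikube-v1.35.0.exe start -p missing-upgrade --memory=3072 --driver=docker
    docker stop missing-upgrade
    docker rm missing-upgrade
    # the new binary must rebuild the container rather than reuse it
    out/minikube-windows-amd64.exe start -p missing-upgrade --memory=3072 --driver=docker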

TestStoppedBinaryUpgrade/Setup (0.98s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.98s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.26s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-313900 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-313900 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker: exit status 14 (255.5451ms)
-- stdout --
	* [NoKubernetes-313900] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.26s)
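Exit status 14 is minikube's MK_USAGE error: --no-kubernetes and --kubernetes-version are mutually exclusive, exactly as the stderr above says. If a kubernetes-version has been persisted in the global config, the suggested remedy is:

    minikube config unset kubernetes-version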

TestNoKubernetes/serial/StartWithK8s (89.49s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-313900 --memory=3072 --alsologtostderr -v=5 --driver=docker
no_kubernetes_test.go:120: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-313900 --memory=3072 --alsologtostderr -v=5 --driver=docker: (1m28.5097972s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-313900 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (89.49s)

TestStoppedBinaryUpgrade/Upgrade (407.31s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.1890368709.exe start -p stopped-upgrade-313900 --memory=3072 --vm-driver=docker
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.1890368709.exe start -p stopped-upgrade-313900 --memory=3072 --vm-driver=docker: (2m3.9031251s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.1890368709.exe -p stopped-upgrade-313900 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.1890368709.exe -p stopped-upgrade-313900 stop: (7.3115901s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-313900 --memory=3072 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-313900 --memory=3072 --alsologtostderr -v=1 --driver=docker: (4m36.0952167s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (407.31s)

TestNoKubernetes/serial/StartWithStopK8s (22.15s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-313900 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker
no_kubernetes_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-313900 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker: (18.585603s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-313900 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-313900 status -o json: exit status 2 (653.2165ms)
-- stdout --
	{"Name":"NoKubernetes-313900","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-313900
no_kubernetes_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-313900: (2.9099265s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (22.15s)
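Starting an existing profile with --no-kubernetes leaves the container up but Kubernetes down, which is why the JSON status above shows Host Running with Kubelet and APIServer Stopped, and why the status command exits 2 instead of 0; the test treats that non-OK exit as expected. A sketch:

    minikube start -p nok8s --no-kubernetes --memory=3072 --driver=docker
    minikube -p nok8s status -o json    # exit 2: host Running, kubelet/apiserver Stopped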

TestNoKubernetes/serial/Start (20.6s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-313900 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker
no_kubernetes_test.go:161: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-313900 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker: (20.6040984s)
--- PASS: TestNoKubernetes/serial/Start (20.60s)

TestPause/serial/Start (86.05s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-442700 --memory=3072 --install-addons=false --wait=all --driver=docker
E1213 10:02:36.703307    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-442700 --memory=3072 --install-addons=false --wait=all --driver=docker: (1m26.0490921s)
--- PASS: TestPause/serial/Start (86.05s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\windows\amd64\v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.63s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-313900 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-313900 "sudo systemctl is-active --quiet service kubelet": exit status 1 (632.432ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.63s)
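The verification leans on systemd conventions: systemctl is-active exits 3 for an inactive unit, which minikube ssh reports as "Process exited with status 3" and a non-zero exit of its own; that is the expected outcome when kubelet is not running. Sketch:

    minikube ssh -p nok8s "sudo systemctl is-active --quiet service kubelet"    # non-zero exit expected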

TestNoKubernetes/serial/ProfileList (10.28s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-windows-amd64.exe profile list
E1213 10:02:49.124227    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:194: (dbg) Done: out/minikube-windows-amd64.exe profile list: (7.6005964s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-windows-amd64.exe profile list --output=json: (2.6749466s)
--- PASS: TestNoKubernetes/serial/ProfileList (10.28s)

TestNoKubernetes/serial/Stop (2.01s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe stop -p NoKubernetes-313900
no_kubernetes_test.go:183: (dbg) Done: out/minikube-windows-amd64.exe stop -p NoKubernetes-313900: (2.0067463s)
--- PASS: TestNoKubernetes/serial/Stop (2.01s)

TestNoKubernetes/serial/StartNoArgs (10.5s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-313900 --driver=docker
no_kubernetes_test.go:216: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-313900 --driver=docker: (10.498666s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (10.50s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.56s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-313900 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-313900 "sudo systemctl is-active --quiet service kubelet": exit status 1 (563.1871ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.56s)

TestPause/serial/SecondStartNoReconfiguration (46.27s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-442700 --alsologtostderr -v=1 --driver=docker
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-442700 --alsologtostderr -v=1 --driver=docker: (46.2488324s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (46.27s)

TestPause/serial/Pause (1.03s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-442700 --alsologtostderr -v=5
E1213 10:04:46.044768    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-442700 --alsologtostderr -v=5: (1.03446s)
--- PASS: TestPause/serial/Pause (1.03s)

TestPause/serial/VerifyStatus (0.63s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-442700 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-442700 --output=json --layout=cluster: exit status 2 (627.3758ms)
-- stdout --
	{"Name":"pause-442700","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-442700","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.63s)
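With --layout=cluster, status reports HTTP-flavoured state codes, several of which appear in this report: 200 (OK), 405 (Stopped), 418 (Paused), 507 (InsufficientStorage). A paused cluster is a non-OK state, hence exit status 2 from:

    minikube status -p pause-442700 --output=json --layout=cluster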

TestPause/serial/Unpause (0.85s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-442700 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.85s)

TestPause/serial/PauseAgain (1.62s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-442700 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-442700 --alsologtostderr -v=5: (1.6166825s)
--- PASS: TestPause/serial/PauseAgain (1.62s)

TestPause/serial/DeletePaused (9.57s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-442700 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-442700 --alsologtostderr -v=5: (9.574039s)
--- PASS: TestPause/serial/DeletePaused (9.57s)

TestPause/serial/VerifyDeletedResources (17.69s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (17.4906732s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-442700
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-442700: exit status 1 (68.5101ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-442700: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (17.69s)
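Deletion is verified negatively: after minikube delete, the profile should be gone from the profile list and the docker volume lookup is expected to fail (sketch):

    minikube profile list --output json
    docker ps -a
    docker volume inspect pause-442700    # expected: exit 1, "no such volume"
    docker network ls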

TestStoppedBinaryUpgrade/MinikubeLogs (1.39s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-313900
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-313900: (1.3923195s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.39s)

TestStartStop/group/old-k8s-version/serial/FirstStart (63.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-987400 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-987400 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0: (1m3.3081494s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (63.31s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-987400 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [c1abdf57-8833-4a72-b950-7ff050f007fb] Pending
helpers_test.go:353: "busybox" [c1abdf57-8833-4a72-b950-7ff050f007fb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [c1abdf57-8833-4a72-b950-7ff050f007fb] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.0059937s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-987400 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.85s)
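The deploy check is a plain kubectl round-trip against the profile's context, finishing with an ulimit probe inside the pod (sketch, context name as above):

    kubectl --context old-k8s-version-987400 create -f testdata\busybox.yaml
    # wait for the busybox pod to reach Running, then:
    kubectl --context old-k8s-version-987400 exec busybox -- /bin/sh -c "ulimit -n"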

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-987400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-987400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.7761918s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-987400 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.99s)

TestStartStop/group/old-k8s-version/serial/Stop (20.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-987400 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-987400 --alsologtostderr -v=3: (20.3331277s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (20.33s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-818600 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-818600 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2: (1m26.2634143s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.26s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.53s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-987400 -n old-k8s-version-987400
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-987400 -n old-k8s-version-987400: exit status 7 (217.7552ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-987400 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.53s)
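Addons can be enabled while the cluster is stopped: the test first confirms the Stopped host state (tolerating exit 7 from status) and then runs the enable with image overrides (sketch):

    minikube status --format={{.Host}} -p old-k8s-version-987400    # prints Stopped, exit 7
    minikube addons enable dashboard -p old-k8s-version-987400 --images=MetricsScraper=registry.k8s.io/echoserver:1.4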

TestStartStop/group/old-k8s-version/serial/SecondStart (33.68s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-987400 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0
E1213 10:10:22.015636    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-987400 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0: (33.0383649s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-987400 -n old-k8s-version-987400
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (33.68s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (23.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-n98z4" [4a607dad-4017-4f5b-bc6c-3cd2c6de4606] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-n98z4" [4a607dad-4017-4f5b-bc6c-3cd2c6de4606] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 23.0062762s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (23.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-n98z4" [4a607dad-4017-4f5b-bc6c-3cd2c6de4606] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0055667s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-987400 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.27s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-987400 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.46s)

TestStartStop/group/old-k8s-version/serial/Pause (5.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-987400 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-987400 --alsologtostderr -v=1: (1.1205717s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-987400 -n old-k8s-version-987400
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-987400 -n old-k8s-version-987400: exit status 2 (636.9483ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-987400 -n old-k8s-version-987400
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-987400 -n old-k8s-version-987400: exit status 2 (622.5542ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-987400 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-987400 --alsologtostderr -v=1: (1.0260762s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-987400 -n old-k8s-version-987400
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-987400 -n old-k8s-version-987400: (1.0695481s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-987400 -n old-k8s-version-987400
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (5.13s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-818600 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [a60878e8-c952-43be-a426-93f79eadf869] Pending
helpers_test.go:353: "busybox" [a60878e8-c952-43be-a426-93f79eadf869] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [a60878e8-c952-43be-a426-93f79eadf869] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.0060718s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-818600 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.60s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-818600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-818600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.2945131s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-818600 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.48s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-818600 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-818600 --alsologtostderr -v=3: (12.3147268s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.31s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-818600 -n default-k8s-diff-port-818600
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-818600 -n default-k8s-diff-port-818600: exit status 7 (206.7545ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-818600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.52s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-818600 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-818600 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2: (47.8758645s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-818600 -n default-k8s-diff-port-818600
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.52s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-967q8" [f168280d-4061-4eba-acd3-8b4a9211695f] Running
E1213 10:12:36.712143    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0101313s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-967q8" [f168280d-4061-4eba-acd3-8b4a9211695f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0066012s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-818600 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.27s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p default-k8s-diff-port-818600 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.47s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (5.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-818600 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-818600 --alsologtostderr -v=1: (1.2119384s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-818600 -n default-k8s-diff-port-818600
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-818600 -n default-k8s-diff-port-818600: exit status 2 (619.2131ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-818600 -n default-k8s-diff-port-818600
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-818600 -n default-k8s-diff-port-818600: exit status 2 (614.7513ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-818600 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-818600 --alsologtostderr -v=1: (1.0584333s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-818600 -n default-k8s-diff-port-818600
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-818600 -n default-k8s-diff-port-818600
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.04s)
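
Note: the Pause sequence above doubles as a manual recipe. A minimal PowerShell sketch, reusing the commands from this log (profile default-k8s-diff-port-818600); while paused, status reports Paused/Stopped and exits with status 2, which the harness treats as acceptable:

	out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-818600
	out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-818600   # "Paused", exit status 2
	out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-818600     # "Stopped", exit status 2
	out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-818600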

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (78.71s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-053300 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-053300 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2: (1m18.7061163s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (78.71s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.6s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-053300 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [c6162f1e-f78f-4935-a1b6-51d2a0fe9b7f] Pending
helpers_test.go:353: "busybox" [c6162f1e-f78f-4935-a1b6-51d2a0fe9b7f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [c6162f1e-f78f-4935-a1b6-51d2a0fe9b7f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.0078706s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-053300 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.60s)
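
Note: DeployApp doubles as a runtime sanity check: once the busybox pod is Running, the exec verifies the container's open-file limit (ulimit -n). A minimal sketch, reusing the log's commands, with a kubectl wait standing in for the harness's 8m0s pod watch:

	kubectl --context embed-certs-053300 create -f testdata\busybox.yaml
	kubectl --context embed-certs-053300 wait --for=condition=Ready pod/busybox --timeout=8m   # stand-in for the pod watch above
	kubectl --context embed-certs-053300 exec busybox -- /bin/sh -c "ulimit -n"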

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.47s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-053300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-053300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.2812049s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-053300 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.47s)
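
Note: the --images and --registries flags override where the metrics-server addon pulls its image from; pointing the registry at fake.domain is presumably intentional so the follow-up describe can confirm the override landed in the Deployment spec rather than producing a working metrics-server. Sketch, reusing the log's commands:

	out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-053300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	kubectl --context embed-certs-053300 describe deploy/metrics-server -n kube-system   # Image should now reference fake.domain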

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.18s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-053300 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-053300 --alsologtostderr -v=3: (12.1800485s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.52s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-053300 -n embed-certs-053300
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-053300 -n embed-certs-053300: exit status 7 (215.6216ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-053300 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.52s)
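
Note: addons can be enabled while the cluster is down; status --format={{.Host}} exits with status 7 for a stopped profile (treated as acceptable above), and the dashboard addon is recorded for the next start. Sketch, reusing the log's commands:

	out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-053300   # "Stopped", exit status 7
	out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-053300 --images=MetricsScraper=registry.k8s.io/echoserver:1.4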

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (49.11s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-053300 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2
E1213 10:14:36.878300    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-987400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:14:36.885278    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-987400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:14:36.897270    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-987400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:14:36.919631    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-987400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:14:36.961052    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-987400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:14:37.044045    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-987400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:14:37.206270    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-987400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:14:37.528153    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-987400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:14:38.170348    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-987400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:14:39.452877    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-987400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:14:42.014437    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-987400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:14:46.054516    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:14:47.137351    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-987400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:14:57.379503    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-987400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:15:05.095379    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:15:17.862005    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-987400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:15:22.020196    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-482100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-053300 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2: (48.4932607s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-053300 -n embed-certs-053300
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.11s)
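
Note: the E1213 cert_rotation lines interleaved above reference client certificates under profiles that no longer exist on disk (old-k8s-version-987400, addons-612900, functional-482100); they appear to come from the test binary's shared client-go transport cache rather than from embed-certs-053300 itself, and the restart still completed in 48.5s. A quick way to see which profiles are actually present:

	out/minikube-windows-amd64.exe profile list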

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-pmkrw" [68311c80-3dc3-4370-bec0-456c81b3749e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0074327s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.27s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-pmkrw" [68311c80-3dc3-4370-bec0-456c81b3749e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0090047s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-053300 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.47s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p embed-certs-053300 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.47s)
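
Note: VerifyKubernetesImages lists the images cached on the node and reports anything outside the expected Kubernetes set; the busybox image left over from DeployApp is flagged but tolerated. Verbatim from the log:

	out/minikube-windows-amd64.exe -p embed-certs-053300 image list --format=json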

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-053300 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-053300 --alsologtostderr -v=1: (1.1624273s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-053300 -n embed-certs-053300
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-053300 -n embed-certs-053300: exit status 2 (645.7057ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-053300 -n embed-certs-053300
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-053300 -n embed-certs-053300: exit status 2 (619.8938ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-053300 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-053300 --alsologtostderr -v=1: (1.0044019s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-053300 -n embed-certs-053300
E1213 10:15:39.797694    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-053300 -n embed-certs-053300
--- PASS: TestStartStop/group/embed-certs/serial/Pause (5.10s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (85.46s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-416400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker
E1213 10:15:58.824666    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-987400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:16:17.838183    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-818600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:16:17.845392    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-818600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:16:17.857130    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-818600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:16:17.879122    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-818600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:16:17.921707    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-818600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:16:18.004221    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-818600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:16:18.166362    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-818600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:16:18.488177    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-818600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:16:19.130569    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-818600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:16:20.413508    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-818600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:16:22.976353    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-818600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:16:28.098901    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-818600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:16:38.341247    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-818600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:16:58.823717    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-818600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-416400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker: (1m25.4596805s)
--- PASS: TestNetworkPlugins/group/auto/Start (85.46s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.57s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-416400 "pgrep -a kubelet"
I1213 10:17:12.046491    2968 config.go:182] Loaded profile config "auto-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.57s)
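
Note: KubeletFlags checks the kubelet's actual command line over SSH; pgrep -a prints the PID together with the full argument list, which the harness inspects for the expected driver-specific flags. Verbatim from the log:

	out/minikube-windows-amd64.exe ssh -p auto-416400 "pgrep -a kubelet"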

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (15.53s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-416400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-9vnhc" [51ce66af-488b-4b5e-98d3-66e502e8838d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1213 10:17:20.747762    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-987400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-9vnhc" [51ce66af-488b-4b5e-98d3-66e502e8838d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 15.0133659s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (15.53s)
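
Note: NetCatPod installs the small netcat deployment that the DNS/Localhost/HairPin probes below exec into; replace --force keeps the step idempotent across plugin groups by recreating the deployment if a previous one is still around. Sketch, reusing the log's command plus an added status check:

	kubectl --context auto-416400 replace --force -f testdata\netcat-deployment.yaml
	kubectl --context auto-416400 get pods -l app=netcat   # wait until the pod reports Running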

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-416400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-416400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-416400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
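
Note: taken together, the three probes above are the per-plugin connectivity checklist: DNS resolves the in-cluster service name kubernetes.default, Localhost confirms the pod can reach its own loopback on port 8080, and HairPin confirms the pod can reach itself back through its own service (hairpin NAT). Verbatim from the log:

	kubectl --context auto-416400 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-416400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-416400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"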

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (78.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-416400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-416400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker: (1m18.2132361s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (78.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-jvbh6" [3bf68d38-1eab-448b-a3ed-1b24ab2cc74a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0060646s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.56s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-416400 "pgrep -a kubelet"
I1213 10:19:24.699234    2968 config.go:182] Loaded profile config "kindnet-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.56s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (14.46s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-416400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-5tjsr" [503bdf03-8569-41a1-9b0b-4acc274ac933] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1213 10:19:29.140963    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-5tjsr" [503bdf03-8569-41a1-9b0b-4acc274ac933] Running
E1213 10:19:36.883405    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-987400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 14.0067828s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (14.46s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-416400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-416400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-416400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (1.88s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-803600 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-803600 --alsologtostderr -v=3: (1.8832511s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.88s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.53s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-803600 -n no-preload-803600
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-803600 -n no-preload-803600: exit status 7 (205.9946ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-803600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.53s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (111.69s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-416400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p calico-416400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker: (1m51.6845551s)
--- PASS: TestNetworkPlugins/group/calico/Start (111.69s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (71.66s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-flannel-416400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker
E1213 10:21:17.842545    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-818600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-flannel-416400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker: (1m11.6616278s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (71.66s)
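
Note: besides the built-in values exercised elsewhere in this run (kindnet, calico, flannel, bridge, false), --cni also accepts a path to a CNI manifest; the custom-flannel group uses that form with testdata\kube-flannel.yaml. Verbatim from the log:

	out/minikube-windows-amd64.exe start -p custom-flannel-416400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker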

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.56s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p custom-flannel-416400 "pgrep -a kubelet"
I1213 10:21:42.761493    2968 config.go:182] Loaded profile config "custom-flannel-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.56s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (16.56s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-416400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-pmgv2" [7ba78d8a-c973-47df-b204-abefaa62ea42] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1213 10:21:45.553625    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-818600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-pmgv2" [7ba78d8a-c973-47df-b204-abefaa62ea42] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 16.0064181s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (16.56s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-416400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-416400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-416400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.9s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-307000 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-307000 --alsologtostderr -v=3: (1.8997861s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.90s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.56s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-307000 -n newest-cni-307000
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-307000 -n newest-cni-307000: exit status 7 (225.6666ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-307000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.56s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-v25mq" [f524f291-f779-4efd-8ebb-973086872a70] Running
E1213 10:22:12.547473    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:22:12.554485    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:22:12.567479    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:22:12.590472    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:22:12.633493    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:22:12.716479    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:22:12.879480    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.0065445s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.67s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p calico-416400 "pgrep -a kubelet"
E1213 10:22:13.202480    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
I1213 10:22:13.575496    2968 config.go:182] Loaded profile config "calico-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.67s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (16.63s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-416400 replace --force -f testdata\netcat-deployment.yaml
E1213 10:22:13.845509    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-r9dd9" [39555adf-f184-4d6c-9be4-22d35f5ae7f3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1213 10:22:15.128445    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:22:17.690199    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:22:22.813222    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-r9dd9" [39555adf-f184-4d6c-9be4-22d35f5ae7f3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 16.0063572s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (16.63s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-416400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-416400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-416400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/false/Start (85.49s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-416400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker
E1213 10:22:36.719834    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-213400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:22:53.538499    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p false-416400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker: (1m25.4867913s)
--- PASS: TestNetworkPlugins/group/false/Start (85.49s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (84.95s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-416400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker
E1213 10:23:34.500911    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-416400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker: (1m24.9514153s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (84.95s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.55s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-416400 "pgrep -a kubelet"
I1213 10:24:02.580919    2968 config.go:182] Loaded profile config "false-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.55s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (15.44s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-416400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-vzcpv" [da77c8b4-c1a7-4b2e-bb40-6dcfc2fac33d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-vzcpv" [da77c8b4-c1a7-4b2e-bb40-6dcfc2fac33d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 15.0061284s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (15.44s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-416400 exec deployment/netcat -- nslookup kubernetes.default
E1213 10:24:18.139194    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:24:18.145784    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:24:18.158284    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:24:18.179855    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:24:18.221310    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/false/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-416400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1213 10:24:18.303223    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/false/HairPin
E1213 10:24:18.465030    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:264: (dbg) Run:  kubectl --context false-416400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.58s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-416400 "pgrep -a kubelet"
I1213 10:24:33.471870    2968 config.go:182] Loaded profile config "enable-default-cni-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.58s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.48s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-416400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-vs7z6" [bd89a732-2487-4347-8c1e-b837fc9f0ba6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1213 10:24:36.887535    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-987400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:24:38.638517    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-vs7z6" [bd89a732-2487-4347-8c1e-b837fc9f0ba6] Running
E1213 10:24:46.062283    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-612900\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.0071355s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.48s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-416400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-416400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-416400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (79.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p flannel-416400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker
E1213 10:24:56.423673    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:24:59.120776    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p flannel-416400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker: (1m19.9135691s)
--- PASS: TestNetworkPlugins/group/flannel/Start (79.91s)
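--cni=flannel deploys the flannel DaemonSet into the kube-flannel namespace (its pod, kube-flannel-ds-5mv4p, shows up in the ControllerPod check below); one hedged way to verify the rollout by hand, inferring the DaemonSet name from that pod name, is:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // Block until the flannel DaemonSet is fully rolled out. The name
        // ds/kube-flannel-ds is inferred from the pod name in this report.
        cmd := exec.Command("kubectl", "--context", "flannel-416400",
            "-n", "kube-flannel", "rollout", "status",
            "ds/kube-flannel-ds", "--timeout=10m")
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        _ = cmd.Run()
    }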

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (89.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-416400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker
E1213 10:25:40.083652    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-416400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker: (1m29.6508549s)
--- PASS: TestNetworkPlugins/group/bridge/Start (89.65s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-5mv4p" [ff8c6eca-d4b5-4206-82e7-bc3b774f7bd2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0089245s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p flannel-416400 "pgrep -a kubelet"
E1213 10:26:17.846546    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-818600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
I1213 10:26:18.064321    2968 config.go:182] Loaded profile config "flannel-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (14.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-416400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-rcqch" [73f8c7dd-8c8e-4a04-8f91-88fee1ba1880] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-rcqch" [73f8c7dd-8c8e-4a04-8f91-88fee1ba1880] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.0073644s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-416400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-416400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-416400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-416400 "pgrep -a kubelet"
I1213 10:26:53.362836    2968 config.go:182] Loaded profile config "bridge-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (14.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-416400 replace --force -f testdata\netcat-deployment.yaml
E1213 10:26:53.554248    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-nh944" [641cd123-5012-4726-995c-771b37dcbdee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1213 10:27:02.007150    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-nh944" [641cd123-5012-4726-995c-771b37dcbdee] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 14.010155s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (14.65s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-416400 exec deployment/netcat -- nslookup kubernetes.default
E1213 10:27:08.190327    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-416400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Start (93.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-416400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-416400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker: (1m33.0101132s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (93.01s)
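Unlike the CNI profiles above, kubenet is selected through kubelet's legacy --network-plugin flag rather than a CNI manifest; the KubeletFlags subtest below confirms this by dumping the kubelet command line, a probe that can be replayed with a small sketch (same command as the test, profile name from this run):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same probe the KubeletFlags subtest runs: dump the kubelet command
        // line inside the node and look for the kubenet plugin selector.
        out, err := exec.Command("out/minikube-windows-amd64.exe", "ssh",
            "-p", "kubenet-416400", "pgrep -a kubelet").Output()
        if err != nil {
            fmt.Println("ssh failed:", err)
            return
        }
        fmt.Println("kubenet selected:", strings.Contains(string(out), "kubenet"))
    }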

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-416400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-307000 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.49s)
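image list --format=json prints a JSON document whose exact schema this report doesn't show, so a defensive way to inspect it is to decode into an untyped value; a sketch, reusing the profile name from this run:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("out/minikube-windows-amd64.exe",
            "-p", "newest-cni-307000", "image", "list", "--format=json").Output()
        if err != nil {
            fmt.Println("image list failed:", err)
            return
        }
        var images interface{} // schema isn't shown in the report, so stay untyped
        if err := json.Unmarshal(out, &images); err != nil {
            fmt.Println("not valid JSON:", err)
            return
        }
        fmt.Printf("%v\n", images)
    }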

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/KubeletFlags (0.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-416400 "pgrep -a kubelet"
I1213 10:28:41.948397    2968 config.go:182] Loaded profile config "kubenet-416400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.56s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/NetCatPod (16.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-416400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-rlwfn" [d410990d-5b13-4300-b2bc-ba086dbb0587] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-rlwfn" [d410990d-5b13-4300-b2bc-ba086dbb0587] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 16.0058956s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (16.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-416400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Localhost (0.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-416400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.62s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-416400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.21s)
E1213 10:29:33.937818    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:29:33.944515    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:29:33.956381    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:29:33.978787    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:29:34.021043    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:29:34.103060    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:29:34.265466    2968 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-416400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    

Test skip (35/427)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
44 TestAddons/parallel/Registry 28.42
46 TestAddons/parallel/Ingress 27.28
49 TestAddons/parallel/Olm 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
99 TestFunctional/parallel/DashboardCmd 300.01
103 TestFunctional/parallel/MountCmd 0
106 TestFunctional/parallel/ServiceCmdConnect 10.29
117 TestFunctional/parallel/PodmanEnv 0
154 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
155 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
156 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
157 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
192 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 0.51
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
257 TestGvisorAddon 0
286 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
287 TestISOImage 0
354 TestScheduledStopUnix 0
355 TestSkaffold 0
374 TestStartStop/group/disable-driver-mounts 0.45
394 TestNetworkPlugins/group/cilium 10.44
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Registry (28.42s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 8.3133ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-sdks8" [fe10db1a-d0c5-4003-9e88-24234af9478d] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.0073455s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-hvps8" [5f552c63-9b6b-43eb-a14c-730706c2809d] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.0059563s
addons_test.go:394: (dbg) Run:  kubectl --context addons-612900 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-612900 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-612900 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (15.0160156s)
addons_test.go:409: Unable to complete rest of the test due to connectivity assumptions
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-612900 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-612900 addons disable registry --alsologtostderr -v=1: (1.2091139s)
--- SKIP: TestAddons/parallel/Registry (28.42s)
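Before skipping on its connectivity assumptions, the test probed the registry Service from inside the cluster with a one-shot busybox pod; that probe can be replayed verbatim (sketch below; kubectl may warn about the missing TTY when run non-interactively):

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // One-shot pod that HEAD-requests the registry Service DNS name;
        // --rm removes the pod afterwards, exactly as in the test.
        cmd := exec.Command("kubectl", "--context", "addons-612900", "run", "--rm",
            "registry-test", "--restart=Never", "--image=gcr.io/k8s-minikube/busybox",
            "-it", "--", "sh", "-c",
            "wget --spider -S http://registry.kube-system.svc.cluster.local")
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        _ = cmd.Run()
    }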

                                                
                                    
x
+
TestAddons/parallel/Ingress (27.28s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-612900 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-612900 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-612900 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [8bc1c7e2-6173-404c-8e25-03904405df45] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [8bc1c7e2-6173-404c-8e25-03904405df45] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.0066635s
I1213 08:36:13.300457    2968 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-612900 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: skipping ingress DNS test for any combination that needs port forwarding
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-612900 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-612900 addons disable ingress-dns --alsologtostderr -v=1: (2.3049765s)
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-612900 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-612900 addons disable ingress --alsologtostderr -v=1: (9.0636214s)
--- SKIP: TestAddons/parallel/Ingress (27.28s)
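The smoke test drives nginx through the ingress controller by curling the node's port 80 with the Host header the testdata ingress rule routes on; replayed outside the harness (same profile name, hedged sketch):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // curl the ingress controller from inside the node, setting the Host
        // header that the testdata ingress rule routes on.
        out, err := exec.Command("out/minikube-windows-amd64.exe",
            "-p", "addons-612900", "ssh",
            "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'").CombinedOutput()
        fmt.Printf("%s(err: %v)\n", out, err)
    }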

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (300.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-213400 --alsologtostderr -v=1]
functional_test.go:931: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-213400 --alsologtostderr -v=1] ...
helpers_test.go:520: unable to terminate pid 10124: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)
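The "unable to terminate pid 10124: Access is denied" line is the harness failing to kill the orphaned dashboard child process on Windows; a more forceful cleanup, offered only as a hypothetical sketch (run from an elevated shell; the PID is the one from this report), would tree-kill it with taskkill:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Hypothetical cleanup: /T kills the whole child tree, /F forces it.
        // Run from an elevated shell, since plain termination was denied;
        // PID 10124 is the one from this report.
        out, err := exec.Command("taskkill", "/PID", "10124", "/T", "/F").CombinedOutput()
        fmt.Printf("%s(err: %v)\n", out, err)
    }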

                                                
                                    
x
+
TestFunctional/parallel/MountCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:64: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (10.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-213400 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-213400 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-mrqmg" [30e4c729-a802-4be9-b6e7-49b5e919d06c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-mrqmg" [30e4c729-a802-4be9-b6e7-49b5e919d06c] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.0052387s
functional_test.go:1651: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (10.29s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (0.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-482100 --alsologtostderr -v=1]
functional_test.go:931: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-482100 --alsologtostderr -v=1] ...
helpers_test.go:520: unable to terminate pid 7684: Access is denied.
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (0.51s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd
functional_test_mount_test.go:64: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestScheduledStopUnix (0s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.45s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-889700" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-889700
--- SKIP: TestStartStop/group/disable-driver-mounts (0.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (10.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-416400 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-416400

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-416400

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-416400

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-416400

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-416400

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-416400

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-416400

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-416400

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-416400

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-416400

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-416400

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-416400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-416400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-416400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-416400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-416400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-416400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-416400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-416400" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-416400

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-416400

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-416400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-416400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-416400

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-416400

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-416400" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-416400" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-416400" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-416400" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-416400" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

>>> host: kubelet daemon config:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

>>> k8s: kubelet logs:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 10:03:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://127.0.0.1:51909
  name: stopped-upgrade-313900
contexts:
- context:
    cluster: stopped-upgrade-313900
    user: stopped-upgrade-313900
  name: stopped-upgrade-313900
current-context: stopped-upgrade-313900
kind: Config
users:
- name: stopped-upgrade-313900
  user:
    client-certificate: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\stopped-upgrade-313900/client.crt
    client-key: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\stopped-upgrade-313900/client.key
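
The dumped kubeconfig holds a single cluster, context, and user, all belonging to stopped-upgrade-313900, which is consistent with every kubectl probe above reporting that the cilium-416400 context does not exist. Standard kubectl subcommands to confirm this from a shell:

  kubectl config get-contexts
  kubectl config current-context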

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-416400

>>> host: docker daemon status:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

>>> host: docker daemon config:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

>>> host: docker system info:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

>>> host: cri-docker daemon status:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

>>> host: cri-docker daemon config:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

>>> host: cri-dockerd version:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

>>> host: containerd daemon status:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

>>> host: containerd daemon config:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

>>> host: containerd config dump:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

>>> host: crio daemon status:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

>>> host: crio daemon config:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

>>> host: /etc/crio:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

>>> host: crio config:
* Profile "cilium-416400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416400"

----------------------- debugLogs end: cilium-416400 [took: 9.9160507s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-416400" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cilium-416400
--- SKIP: TestNetworkPlugins/group/cilium (10.44s)
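
The debug pass above is only informative when the profile it targets exists. A sketch of recreating the profile before re-running the plugin tests; the flag values are assumptions based on the test name and the driver used elsewhere in this report:

  minikube start -p cilium-416400 --cni=cilium --driver=docker
  minikube profile list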
